From tactics to strategy: the DevRel measurement gap

Of the 13 DevRel leaders I interviewed for my MBA dissertation, only two could clearly demonstrate a coherent link between tactical activity and organisational strategy. In this post, I talk through how focusing on the developer journey can help bridge that gap.

2026-03-22 · 15 min read
Part of: The Strategic Case for Developer Relations · Post 7 of 8
  1. How does Developer Relations (DevRel) create value? What 13 interviews revealed.
  2. Developer Relations is more than marketing. It's co-creation.
  3. Developer experience: prerequisite and product of DevRel
  4. The four pillars of DevRel (and the foundation they rest on)
  5. Company context: the conditions that shape DevRel strategy
  6. Why developer communities are not brand communities
  7. From tactics to strategy: the DevRel measurement gap
  8. The feedback loop: how DevRel bridges community and product

A pattern I see across DevRel is that teams can usually tell you what they did by using metrics like video views, blog traffic, event attendance, stars on GitHub repositories, community growth and Net Promoter Scores. The visible metrics are rarely the hard part.

The harder part is working out whether those metrics tell you anything meaningful about impact, tied back to product adoption and improving the developer experience. Many of us practitioners have known that for a long time. DevRel has plenty of vanity metrics: numbers that are easy to collect, easy to report, and easy to mistake for evidence that something important changed. That doesn’t make them useless, but they don’t tell the full story on their own.

That challenge may feel familiar if you’re from a marketing background. Marketing teams have spent years trying to move beyond reach metrics and towards something that shows real influence. DevRel is running into the same question: not just whether people saw the work, but whether it changed anything that mattered.

Disclaimer: these reflections come from 13 interviews I ran for my MBA dissertation. All the people I spoke to were in technology companies, but if you’ve read any of the earlier posts in the series, then you already know that!

I went into my dissertation research expecting measurement to be a challenge for DevRel teams. I’ve been in DevRel long enough to know these challenges (e.g. when a metric looks promising but you cannot quite explain what changed because of it). Even so, I wasn’t prepared for quite how universal the struggle was. The more I listened, the clearer it became that the problem was not a lack of numbers, but a lack of a story connecting those numbers to strategy.

The DevRel measurement gap in practice

Of the 13 DevRel leaders I interviewed, only two could clearly show how their day-to-day work connected to the organisation’s strategic goals. Admittedly, this was a small qualitative sample of experienced practitioners, not a census of the whole profession (and I mentioned the earlier sampling bias), so we can’t conclude much from it in a general sense. But 2 out of 13 is roughly 15%, and that signal shows the shape of the problem.

The other eleven respondents had no shortage of metrics. In several cases, they had extensive measurement infrastructure, detailed dashboards, and regular reporting cycles. But they didn’t have the thread from those metrics to anything the business cared about at a strategic level. They could tell you how many people watched their last tutorial, but couldn’t connect that back to the product adoption targets that the company was working towards.

Put another way, only two respondents could explain how their day-to-day metrics connected to the wider developer journey (whether they pictured that as a funnel, a flywheel, or something else). That is very different from simply reporting activity and hoping the meaning is obvious. One participant described the challenge with frustration:

We pick a metric; we change a metric. We pick a metric; we change a metric. That has been consistent throughout my DevRel career.

This isn’t a capacity problem, as teams were well-resourced at the time. The issue is that teams often fall back on activity metrics because they’re comfortable. They’re easier to collect and easier to report, even when they don’t explain the full picture of what the business is trying to change.

Chris, isn’t this just another argument for better reporting?

Not really. As I mentioned, many of these teams already had the reporting infrastructure and regular reporting rhythms in place. And clearer reporting won’t save you if it only makes activity easier to count. The deeper problem is deciding what you think a metric is evidence of, and why it even matters in the first place. I know that sounds philosophical (and I suppose in a way it is), but it routes back to the core question of ‘why does your Developer Relations team exist?’.

Equally, most of the teams I spoke to weren’t careless about metrics. The issue was that the work touched so many parts of the organisation that the measurement story kept drifting. The same participant shared:

Measurement has always been the trickiest part of DevRel because we’re touching so many parts of an organisation. It feels like it’s a distracted profession and hard to measure consistently.

Why DevRel tactical metrics are necessary but not sufficient

To be clear: I’m not arguing that tactical metrics do not matter. Blog traffic, video views, community engagement rates, tutorial completions and the like are indicators of real things that DevRel teams are doing, and of how they’re performing. A DevRel team that cannot tell whether its content is being consumed, or whether its community is engaged, cannot even begin to understand whether the work is having an impact. Those metrics are necessary, but not sufficient on their own.

The problem is treating tactical metrics as if they are the destination rather than the starting point. If a blog post gets 10,000 views, that may be genuinely useful. But what’s the baseline for that type of content? And what is it useful for? Did those views include the developers you were actually trying to reach? Did any of them go on to try the product, and did any of them come back? Did the content help someone move between stages of the journey (e.g. from Evaluate to Learn, or from Learn to Build)?

When you lack a framework that connects the tactical metric to something bigger, you can’t answer those questions. And if you can’t answer those questions, you stay stuck describing the motion instead of explaining the change. And it’s not just my dissertation sample saying this. In its announcement for the 11th Annual State of Developer Relations report, DevRel.Agency reported that 61% of respondents still found it difficult to prove their influence. That is a much bigger sample pointing in the same direction.

This is why I keep coming back to DevRel.Agency’s metrics that matter framing:

  • Program goals answer the “why”
  • Activity metrics tell you “how”
  • Community metrics tell you whether the surrounding ecosystem is getting stronger

If you only report the middle layer, you end up with “motion without meaning”. Sean Falconer makes a similar point in DevRel metrics and why they matter: if you cannot tie your chosen DevRel metrics back to the company or product goal that leadership cares about, the numbers may still be interesting, but they will not be persuasive. I think that is the trap many teams fall into.

I also think it helps to separate signals that tell you whether the product is moving in the right direction from signals that tell you where the developer journey is breaking down. Daily active users, monthly active users, or engaged users can genuinely be useful if your company’s goal is product adoption. They can tell you whether you are broadly “heading north”, but they do not tell you why progress has stalled, or where developers are getting stuck. For that, you need something more diagnostic; you need the map as well as the compass.

The DevRel funnel as a strategic tool

Sticking with that compass and map analogy, I needed a way to identify what should happen next for a developer, and the developer funnel was the model that helped most. Caroline Lewko and James Parton describe a very similar five-stage model in DevRel.Agency’s Developer Journey map.

I’ve heard other variations too (some people feel “funnel” is a bit too linear and prefer “flywheel” instead). After all, developer journeys loop back, skip stages, and re-enter all the time. Either way, I find the journey framing useful because it forces me to ask what should happen next, and to connect that back to the tactical work.

Unlike a traditional marketing funnel (which maps consumer psychology from awareness to purchase), a developer funnel maps the developer’s journey through stages of engagement with a technology platform. In the model I’ve been working with, these stages are:

  1. Discover: The developer first becomes aware that the product exists and may be relevant to their needs.
  2. Evaluate: The developer explores the product, reads documentation, tries a sandbox, runs a proof of concept, or asks peers about their experience.
  3. Learn: The developer builds proficiency, works through tutorials, attends workshops, learns from peers in community spaces, and sees developers sharing what worked while learning out in the open.
  4. Build: The developer creates something real with the product, whether that is a project, a prototype, or a production integration.
  5. Scale: The developer or their organisation deepens their investment, expands usage, and potentially becomes a contributor or advocate.

What I like about the funnel is that it gives you more than a progression. It gives you a way to connect a tactic to what the business is actually trying to move. And this wasn’t a framework I imposed on the data afterwards; one participant described the same approach:

I track everything. If we take a developer journey map (discover, evaluate, learn, build, and scale) […] If you tell the users what to do next, then you can track that discovery point […]

That stuck with me because it turns the funnel from an abstract model into a practical measurement tool. Another participant was more blunt about what shouldn’t count:

Website views, I think, are rubbish. They don’t do anything of value for me.

That sounds harsh, but I think the point stands. A metric is only useful if it tells you something about a developer’s movement through the journey, not just that somebody looked at something once.

If a company’s strategic goal is to increase the number of active developers building in production with their platform (a Build-stage metric), you can work backwards. What Learn-stage activities prepare developers to succeed at Build? What does successful learning look like, and how would you measure it? What Evaluate-stage activities and resources help developers decide to invest the time in learning? What brings developers into Evaluate in the first place?

Now you have a chain: your tutorial content (tactical metric: completion rate) serves the Learn stage (intermediate metric: time-to-first-success), which feeds into the Build stage (strategic metric: active builders, or whatever the business cares about). That is a story you can actually explain from end to end. Given that one of our primary jobs in Developer Relations is to come up with narratives and stories around products, it’s ironic that the overarching narrative is what we often miss in reporting.
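To make that chain tangible, here is a hedged sketch (every count is invented for illustration) of rolling stage-level counts up into stage-to-stage conversion rates, which is the connective tissue between a tactical metric and the strategic one:

```python
# Illustrative only: invented counts of developers observed at each
# journey stage over the same period.
stage_counts = {
    "Discover": 10_000,
    "Evaluate": 2_500,
    "Learn": 800,
    "Build": 200,
    "Scale": 40,
}

def conversion_rates(counts: dict[str, int]) -> dict[str, float]:
    """Conversion rate from each stage to the next, keyed 'A->B'."""
    stages = list(counts)
    return {
        f"{a}->{b}": counts[b] / counts[a]
        for a, b in zip(stages, stages[1:])
    }

for transition, rate in conversion_rates(stage_counts).items():
    print(f"{transition}: {rate:.0%}")
```

If a tutorial’s completion rate rises and the Learn-to-Build conversion moves with it, you have the beginnings of a story; if the conversion doesn’t move, you’ve learned something just as useful.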

That idea also shows up in the more practical measurement writing. Cliff Simpkins argues in Defining DevRel Success Measures that DevRel metrics have to reflect where DevRel sits across product, marketing, and engineering (instead of borrowing somebody else’s success measures and hoping they fit). I think that’s right; it’s not just about tracking more, but tracking what the work influences.

That is also why I spent so much of the previous post on the DevRel feedback loop talking about representative feedback. If you don’t know which developers are moving from one stage to the next, or where they are getting stuck, you can’t tell a convincing story about why a tactic mattered, and that breaks the thread from tactics to strategy.

That’s where the funnel transitions from being a diagram to a useful framing for the conversation. It helps you decide what you are trying to move, for whom, and what evidence would count as progress.

Developer personas in the DevRel funnel

The funnel also becomes more useful when it’s paired with clear developer personas. Not every developer enters the funnel at the same stage or progresses through it in the same way:

  • An experienced developer building a new integration starts at Evaluate, not Discover.
  • A student working on a university project might spend a long time in Learn before ever reaching Build.
  • An enterprise developer may need to navigate procurement and security review as part of the Build-to-Scale transition.

Different personas have different journeys. And the funnel makes those differences actionable. If the developers you most need are enterprise teams trying to scale in production, but most of your content is still aimed at newcomers in Discover and Evaluate, you have a mismatch. The funnel surfaces that mismatch in a way that bucketed activity metrics will not.
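One way to surface that mismatch (a toy sketch with invented numbers, not a real audit) is simply to compare where your content effort goes against the stages your priority persona actually occupies:

```python
# Invented example: count of content pieces aimed at each stage,
# versus the stages your priority persona (say, enterprise teams
# moving to production) actually occupies.
content_by_stage = {"Discover": 12, "Evaluate": 8, "Learn": 3, "Build": 1, "Scale": 0}
priority_persona_stages = {"Build", "Scale"}

covered = sum(n for s, n in content_by_stage.items() if s in priority_persona_stages)
total = sum(content_by_stage.values())
print(f"{covered}/{total} pieces serve the priority persona's stages")
```

Even a crude tally like this makes the conversation with leadership easier: the gap is a number, not a feeling.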

That’s also why I think Caroline Lewko’s segmentation piece for Heavybit is useful here. Her argument is that “developers” is too broad a label to help you make good decisions. I agree, and we’ve covered this in my previous post on The four pillars of DevRel (and the foundation they rest on). If the persona work is vague, measuring the impact for that group will also be ambiguous.

Introducing RODI: return on developer investment

I wanted a way to talk about whether DevRel investment is paying off at each stage of the journey, which is when I came up with RODI: Return on Developer Investment.

The concept is borrowed from ROMI (Return on Marketing Investment) and applies similar logic: if you can estimate the cost of supporting a developer through each stage of the funnel, and you know the conversion rates between stages, you can begin to model the cost-effectiveness of different investments.

How much does it cost (on average) to bring a developer from Discover to Evaluate? From Evaluate to Learn? From Learn to Build? If you can estimate those figures, even roughly, even as directional indicators rather than precise calculations, you start to have a basis for prioritisation decisions that goes beyond “this content got the most views”.
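As a back-of-the-envelope sketch (the figures are entirely made up, and RODI itself is only an idea at this stage), a cost-per-transition estimate could look like this:

```python
# Hypothetical RODI-style estimate: spend on stage-focused activity
# divided by the number of developers who advanced past that stage.
# All figures invented for illustration.
stage_spend = {          # spend on activities aimed at each stage
    "Discover": 50_000,
    "Evaluate": 30_000,
    "Learn": 40_000,
}
advanced = {             # developers who moved on to the next stage
    "Discover": 2_500,   # Discover -> Evaluate
    "Evaluate": 800,     # Evaluate -> Learn
    "Learn": 200,        # Learn -> Build
}

for stage, spend in stage_spend.items():
    cost_per_dev = spend / advanced[stage]
    print(f"Cost per developer advancing past {stage}: {cost_per_dev:,.2f}")
```

Treated as directional indicators rather than precise accounting, numbers like these at least let you compare investments across stages instead of across view counts.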

To be clear though, RODI is an idea at this stage, not a finished method. To use it properly, you would need solid data on how developers move through the funnel, and the costs that go into the various activities that we do in DevRel. That is a hard problem to solve. But even so, I still think (along with the journey positioning) that these framings help us ask better questions.

The DevRel developer experience paradox

There’s one more part of the measurement challenge I keep coming back to: where Developer Experience sits in the model.

Most DevRel practitioners would describe Developer Experience as the goal. If you improve the developer experience, then adoption follows. But as we’ve discussed previously, I think the more useful way to handle DX is to treat it as a prerequisite for DevRel’s work and something DevRel can improve over time:

  • It is a prerequisite because developers will not adopt a product with poor DX. No amount of excellent advocacy, education, or community investment overcomes the barrier of a product that is genuinely difficult to use, poorly documented at the core, or unreliable. Good DX is a prerequisite for DevRel being effective. If the product experience is broken, DevRel cannot fix it through great content.
  • It’s also something DevRel can shape. Product feedback, advocate insights, community requests, and what teams learn from those interactions can all improve the product’s DX over time. Teams that make sure those community patterns reach the product team, and help it understand where developers are struggling, are contributing to the next round of improvement.

This developer experience paradox matters for measurement. If you only measure DX as an output, you miss the extent to which DevRel also helps create the conditions for DX improvement. And if you treat DX purely as an input, you miss DevRel’s ongoing contribution to making it better.

It also matters for how DevRel leaders talk to product leadership. DevRel is not just outward evangelism. It is inward advocacy and product insight as well, something we have touched on in several pieces including the four pillars, value co-creation not marketing, devrel communities not brand communities and feedback loop posts (see how it’s all coming together?). Both of those contributions need to show up in how you measure and report, making the invisible work visible.

A practical starting point for DevRel measurement

I’m not going to pretend that this measurement problem has an easy solution. Several of the participants I spoke to have been working on it for years and haven’t fully cracked it. Attribution across a complex developer journey is hard, and the time horizon between DevRel investment and commercial outcomes can be long. For some organisations, the data infrastructure required isn’t in place (in-product telemetry, the ability to tie user engagement across external platforms like YouTube, and so on).

But the starting point is available to every team today: build the funnel, define the personas, and create the explicit chain from tactical activities to funnel stages to strategic outcomes. You may not have perfect data, but you’ll likely have something directional. That is still better than precision without a story.

One of the clearer examples from the interviews came from a team thinking explicitly about activation rather than just volume:

We’re checking to ensure developers’ activation is high in areas we know they may struggle with that the company cares about. So, can they reach value within a certain amount of time?

That is the type of question I’d love to see more DevRel measurement conversations centre on. Several participants described the “end state” in similar terms. One put it like this:

But we’ve never been able to track the straight line between if someone visits the product website, makes a business decision, downloads it, likes it and buys it. That’s the Holy Grail, though.

That is incredibly hard, and it may always be hard. But complexity is no excuse for avoiding the work. With that, I think the goal isn’t perfect attribution. Instead, it’s enough shared evidence that you can explain why a DevRel tactic exists, who it is serving, which stage it supports, and what strategic outcome it is meant to influence. That is a much better starting point than a dashboard of disconnected vanity metrics.

Are there certain metrics that have worked for your team? How have you connected them to your developer journey, and ultimately to your company’s strategic goals? Leave a comment on the Bluesky thread below, or send me a DM on LinkedIn for a more private conversation.

In my final post in this series (I’ve really enjoyed writing these up, so tentatively call this the “final” one as I might deepen the writeups in some areas), I’m going to look at a pattern that closely relates to this lack of clarity: randomisation. Why do DevRel teams get pulled in multiple directions, what is the root cause, and what would it take to stop it?

Until the next one, thanks for reading - and see you in the (potentially) last post of the series!

