Measuring platform satisfaction: The 3 most helpful techniques

Platform Engineering is often motivated by the idea that improving developer experience results in good outcomes for developers, customers, and the organization. When developers can deliver a smooth stream of helpful features, everyone benefits.

Even in cases where a platform is primarily motivated by compliance, standardization, or infrastructure cost control, developer satisfaction is still crucial. If developers face constant friction with the platform, they are more likely to look for ways around it. That eventually damages the alignment benefits.

Even a mandatory platform isn’t immune to the effects of developer dissatisfaction. If policies alone could make people use the tools the organization selected, shadow IT wouldn’t exist.

That makes measuring satisfaction vital, and that’s why it’s included in the MONK metrics framework for Platform Engineering. While MONK metrics suggest using net promoter score (NPS), other options are available to platform teams, and the most powerful way to measure developer satisfaction is multi-modal.

In this post, we’ll look at 3 techniques for measuring customer satisfaction you can use to assess developer satisfaction with your platform.

  • Net promoter score (NPS)
  • Customer satisfaction score (CSAT)
  • Customer effort score (CES)

Net promoter score (NPS): Measuring customer loyalty

You can measure NPS with a single question, collecting a rating from 0 (not at all likely) to 10 (extremely likely).

How likely are you to recommend the platform to other developers?

It’s helpful to provide an optional follow-up question to ask why the respondent chose their score. Letting them explain it lets you understand where friction or sharp edges could be removed to improve satisfaction with your platform.

How NPS works

You calculate NPS by sorting responses into one of three groups, though only two of them are used to calculate the score.

Score   Group
9-10    Promoters
7-8     Passives
0-6     Detractors

Promoters are loyal enthusiasts and super fans who will use your platform and tell other developers about it. They will help other developers understand what the platform can do and show them how to use it.

Passives are happy enough to use the platform, but they could be tempted to use something else if it’s easier. They are less likely to spread the word or help other developers use it.

Detractors don’t like the platform. It’s in their way and it slows them down. As we all do when we’re unhappy with a product or service, they’ll share their dissatisfaction with many other developers, and they are just as convincing as the promoters.

You calculate the net promoter score by subtracting the percentage of detractors from the percentage of promoters. Don’t include the passives in this calculation. The score can range from -100 to +100.

(% of promoters) - (% of detractors) = NPS

If 50% of platform users were promoters, and 20% were detractors, your score would be 30.
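As a quick illustration, the calculation can be sketched in a few lines of Python (the `nps` helper and sample ratings below are hypothetical, not part of any survey tool):

```python
def nps(scores):
    """Net promoter score from a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)   # ratings of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # ratings of 0 to 6
    # Passives (7-8) count toward the total but neither add nor subtract.
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives, 2 detractors out of 10 responses
ratings = [10, 9, 9, 10, 9, 8, 7, 7, 3, 5]
print(nps(ratings))  # 30
```

Note that the passives still appear in the denominator, which is why a large passive group pulls the score toward zero.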

The table below shows what different scores mean.

Score        Classification
Less than 0  Bad
0-19         Okay
20-49        Good
50-69        Excellent
70+          World-class

When to use NPS

NPS is useful for understanding overall platform satisfaction. By tracking trends, you can detect when the rust sets in, such as when underlying tools have added new features your platform users can’t access through your platform.

The score alone doesn’t tell you how to respond to a change in sentiment, so adding a follow-up question to collect a reason for the score and maintaining regular contact with developers who use your platform are vital.

Customer satisfaction score (CSAT): Direct satisfaction measurement

Customer satisfaction (CSAT) scores are a simple, direct approach you can apply to your whole platform or specific platform features. You collect a rating from 1 (extremely dissatisfied) to 5 (extremely satisfied) based on a question such as: “How satisfied are you with the platform?”

You can adjust the wording to collect feedback on specific features, which lets you dig deeper into what areas need improvement. However, you shouldn’t annoy platform users with constant pop-ups and survey emails, as you’ll start lowering overall satisfaction.

How CSAT works

You calculate the CSAT score by taking the percentage of platform users with a positive rating (a 4 or a 5). This means scores will be between 0 and 100.

((number of 4s and 5s) / total responses) x 100 = CSAT score

If 4 out of 10 developers gave you a 4 or a 5, your CSAT score would be 40%.
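The calculation above can be sketched in Python like this (the `csat` helper and sample ratings are hypothetical):

```python
def csat(ratings):
    """CSAT score: percentage of 1-5 ratings that are positive (4 or 5)."""
    positive = sum(1 for r in ratings if r >= 4)
    return round(100 * positive / len(ratings))

# 4 out of 10 developers gave a 4 or a 5
ratings = [5, 4, 4, 3, 3, 2, 5, 1, 2, 3]
print(csat(ratings))  # 40
```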

Some organizations that use CSAT calculate an average across all ratings, but this is a composite customer satisfaction score, not a CSAT score.

When to use CSAT

CSAT is helpful when you want the ability to create comparable scores across different platform components and obtain an overall satisfaction score.

CSAT excels at measuring satisfaction with specific interactions as well as the whole platform. The metric provides immediate, actionable feedback but may not predict long-term loyalty as effectively as NPS or CES.

Customer effort score (CES): Measuring experience ease

Customer effort scores focus on how easy it was for the developer to achieve their goal. There’s evidence that this is a better indicator of customer loyalty than measures of delight. Like CSAT, you collect a rating from 1 (very easy) to 5 (very difficult) in response to a question such as: “How easy was it to set up a new project?”

It’s worth noting this scale measures effort, so lower ratings are better. Because the developer is rating how much effort a task took, it makes sense for a higher number to represent more effort.

How CES works

Customer effort score is the percentage of respondents who rate the experience as easy (a rating of 1 or 2).

((number of 1s and 2s) / total responses) x 100 = CES

If 6 out of 10 developers rate the ease as 1 or 2, your score is 60%.
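The same pattern works for CES; remember that low ratings count as positive here (the `ces` helper and sample ratings are hypothetical):

```python
def ces(ratings):
    """CES: percentage of 1-5 effort ratings that are easy (1 or 2)."""
    easy = sum(1 for r in ratings if r <= 2)  # low effort is the good outcome
    return round(100 * easy / len(ratings))

# 6 out of 10 developers rated the task a 1 or a 2
ratings = [1, 2, 1, 2, 2, 1, 3, 4, 5, 3]
print(ces(ratings))  # 60
```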

When to use CES

In Platform Engineering, customer effort scores help surface platform friction. Maintaining strong scores will help make sure your platform is a force multiplier, not a bottleneck. You should collect CES on task completion so it reflects how easily and successfully developers can get work done using the platform.

A quick comparison of NPS, CSAT, and CES

Here’s a summary of the 3 approaches that highlights their strengths and differences.

Approach  Scores        Applies to
NPS       -100 to +100  Emotional loyalty
CSAT      0 to 100      Platform or component satisfaction
CES       0 to 100      Task completion and friction

The power of integration: Why multiple approaches matter

While each satisfaction metric provides a view into developer satisfaction, the most effective approach is to combine them to get a better understanding of the experience developers have with your platform.

Each approach offers a different angle of view: NPS captures emotional loyalty, CSAT measures satisfaction with specific platform areas, and CES identifies friction. Crucially, they provide signals you can use to delve deeper to understand what could be improved and where you’ll get the most impact from changes.
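If you track all three together, a simple summary is easy to assemble from the separate survey responses. This sketch assumes you’ve collected each metric’s ratings independently; the function name and structure are illustrative only:

```python
def score_summary(nps_ratings, csat_ratings, ces_ratings):
    """Combine NPS (0-10 scale), CSAT (1-5), and CES (1-5) into one view."""
    promoters = sum(1 for s in nps_ratings if s >= 9)
    detractors = sum(1 for s in nps_ratings if s <= 6)
    return {
        "nps": round(100 * (promoters - detractors) / len(nps_ratings)),
        "csat": round(100 * sum(1 for r in csat_ratings if r >= 4) / len(csat_ratings)),
        "ces": round(100 * sum(1 for r in ces_ratings if r <= 2) / len(ces_ratings)),
    }

print(score_summary(
    [10, 9, 9, 10, 9, 8, 7, 7, 3, 5],  # NPS survey responses
    [5, 4, 4, 3, 3, 2, 5, 1, 2, 3],    # CSAT survey responses
    [1, 2, 1, 2, 2, 1, 3, 4, 5, 3],    # CES survey responses
))  # {'nps': 30, 'csat': 40, 'ces': 60}
```

Reporting the three numbers side by side makes it easier to spot disagreements between them, such as a healthy CSAT masking a poor effort score on a specific task.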

Though a multi-approach system is best, especially as part of a framework for measuring Platform Engineering such as MONK metrics, you should build the picture in small steps. An iterative and incremental approach works for measurement building just as it works for product development.

You also have to consider the frequency and timing of developer surveys, even when they have a single question. The developer must have a complete experience to report on before you ask for a score, and interrupting a task to ask for feedback generally lowers the overall experience as it frustrates developers who are in the middle of a complex task.

While it’s desirable to check back in with developers to capture the trends in their opinions on the platform and its components, you must balance this against the irritation of being asked for feedback too frequently. The bad timing of pop-up surveys often skews NPS, CSAT, and CES scores.

One thing’s for sure, though. The worst thing you could do is measure nothing.