The NPS, no pun intended, has many detractors. A lot has been written about it, and one of the best pieces, in my opinion, is Jared Spool's “Net Promoter Score considered harmful” (with an equally interesting counter-opinion by Jeff Sauro).
But one of its biggest shortcomings, about which I haven't read much, is the difference between how NPS interprets scores and how users actually assign a score to their experience. This gap makes the teams in charge of NPS waste resources fixing things that ain't broken.
How NPS interprets scores
For NPS, scores 0–6 are bad, 7–8 neutral, and only 9–10 good.
Here’s the definition straight from its creators:
- Promoters (score 9–10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
- Passives (score 7–8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
- Detractors (score 0–6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.
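To make the arithmetic concrete, here is a minimal Python sketch of this bucketing and of the standard formula (NPS = % promoters − % detractors). It also shows a side effect of that formula: a mildly positive audience and a perfectly polarized one can produce the same score.

```python
def nps_bucket(score: int) -> str:
    """Classify a 0-10 response the way NPS does."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores: list[int]) -> float:
    """NPS = % promoters - % detractors (passives only dilute the result)."""
    promoters = sum(1 for s in scores if nps_bucket(s) == "promoter")
    detractors = sum(1 for s in scores if nps_bucket(s) == "detractor")
    return 100 * (promoters - detractors) / len(scores)

# Ten mildly positive 7s and 8s yield an NPS of 0 — the same as
# a polarized audience of five 10s and five 0s.
print(nps([7, 8, 7, 8, 7, 8, 7, 8, 7, 8]))  # 0.0
print(nps([10] * 5 + [0] * 5))              # 0.0
```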
How users score their experience
Let's compare the NPS evaluation with the user's mental model. When presented with a 0–10 scale, users intuitively consider the middle of the scale neutral, everything above it positive, and everything below it negative.
NPS vs Users
If we compare the way users score their experience with the way NPS interprets that score, we'll see there's some distortion.
Now that we see both side by side, we realize it is not correct to assume that someone who gave us a 6, or even a 5, is dissatisfied and could potentially damage our brand. These are users with a neutral opinion, and a neutral opinion is not necessarily a bad one.
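Here is a small sketch that puts both readings side by side. The user-side cutoffs (below 5 negative, 5–6 neutral, 7 and up positive) are my interpretation of the mental model described above, not an official scale:

```python
def nps_reading(score: int) -> str:
    """How NPS buckets a 0-10 response."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def user_reading(score: int) -> str:
    """Assumed mental model: below the middle of the scale is
    negative, around it neutral, above it positive."""
    if score >= 7:
        return "positive"
    if score >= 5:
        return "neutral"
    return "negative"

for score in range(11):
    print(f"{score:>2}  NPS says: {nps_reading(score):<9}  user means: {user_reading(score)}")
# Scores 5-8 are where the readings diverge: NPS files 5s and 6s
# under "detractor" and 7s and 8s under "passive", while the user
# meant "neutral" and "positive" respectively.
```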
A lot of products that serve a mostly functional purpose will receive a 5 or a 6 simply because they fulfill that purpose.
Treating a 6 as a negative score and triggering customer-retention actions for someone who is clearly not at risk of abandoning our product is a waste of resources.
Likewise, assuming that someone who gave us a 7 or an 8 (usually considered good, or at least good-enough, scores) has a neutral opinion about our product is also incorrect. They may not be as enthusiastic as someone who gave us a 9 or a 10, but they are by no means neutral or indifferent.
Nor is it correct to assume that people who scored 9–10 are the only ones who would recommend our product or brand. In the user's mind 7–8 is positive (and 5–6 neutral), so nothing indicates that those users would become detractors.
Additional considerations
Because NPS is easy to understand and implement, it has become the de facto tool for measuring customer satisfaction in many organizations. According to its creators, “NPS is the only tool you will ever need to measure customer experience”. But customer experience is the sum of many experiences across multiple channels.
Taking that into account, it's OK for my bank to send me an NPS survey after I use an ATM and interpret that score as a global indicator of my experience.
The problem is that most companies will interpret my NPS score as a score specific to the channel or feature (yes, many companies send NPS surveys to evaluate features!) that triggered the survey.
In other words, if I used the ATM without any issues but I am dissatisfied with my bank in general, I am very likely to give a poor score. So far so good, because the experience is the sum of all experiences across all channels.
But since the survey was sent right after I used the ATM, the company will interpret my score as dissatisfaction with the ATM, not with the bank in general.
Recommendation as indicator of satisfaction
Sometimes a really happy customer won't recommend your app or service. I may be deeply in love with my bank's app, but I can't recommend it to someone who isn't a client.
In addition, the app is part of the service, so it's not like I downloaded it because I wanted it badly; I got it because it supports my bank's services. So here, recommendation is not the best indicator of satisfaction, nor is NPS the best tool.
In cases like this, the SEQ (Single Ease Question), which measures ease of use, or CSAT (Customer Satisfaction Score), which focuses on the experience itself, are more appropriate tools.
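For reference, here is a quick sketch of how these two metrics are often computed. The “top-two-box” CSAT formula and the 1–7 SEQ scale are common formulations, not the only ones:

```python
def csat(responses: list[int]) -> float:
    """CSAT is commonly reported as the share of 'top two box'
    answers (4s and 5s) on a 1-5 satisfaction scale."""
    satisfied = sum(1 for r in responses if r >= 4)
    return 100 * satisfied / len(responses)

def seq(responses: list[int]) -> float:
    """SEQ is a single 1-7 ease-of-use rating, usually
    summarized as the mean across respondents."""
    return sum(responses) / len(responses)

print(csat([5, 4, 4, 3, 5]))  # 80.0 -> 80% satisfied
print(seq([6, 7, 5, 6]))      # 6.0
```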
Final thoughts
My point is that we shouldn't evaluate NPS responses against the technical definition of the tool, but rather through the user's mental model, since that is the perspective from which the score was given.
The gap between the technical definition and the user's mental model distorts the analysis, making the teams responsible for CX overreact or act unnecessarily. It can also create a sense of underachievement when in most cases there is none.
This distortion makes CX people fix things that ain't broken and moves the focus away from actual issues that have a real impact on the experience.