NPS Is Telling You Something. Most Brands Aren’t Listening

NPS scores are one of the most widely collected metrics in marketing, and one of the most widely ignored. Brands run quarterly surveys, celebrate a score of 42, and then file the results somewhere between the board deck and the recycling bin. The number becomes a KPI rather than a signal, and the gap between what customers are telling you and what the business actually does about it keeps widening.

The conversation around NPS in customer experience has been moving in an interesting direction over the past few years. Scores are under pressure across most sectors, survey fatigue is real, and the companies pulling ahead are the ones treating NPS as a diagnostic rather than a scorecard. That distinction matters more than most leadership teams want to admit.

Key Takeaways

  • NPS is most valuable as a diagnostic tool, not a performance metric. Tracking the score without acting on the verbatim feedback is a waste of the data.
  • Survey fatigue is suppressing response rates across sectors. Shorter, more contextual surveys tied to specific interactions outperform annual relationship surveys.
  • The brands improving NPS fastest are closing the loop with detractors, not just celebrating promoter scores.
  • NPS alone cannot tell you where in the customer experience the problem sits. It needs to be paired with experience-level data to be actionable.
  • AI is changing how brands analyse NPS verbatim at scale, but the quality of the action taken still depends on human judgment and organisational will.

There is a broader conversation happening in customer experience strategy right now, and NPS sits at the centre of it. If you want the full context on how experience strategy is evolving, the Customer Experience hub covers the landscape from measurement through to technology and retention.

Why NPS Scores Are Declining Across Most Sectors

The short answer is that customer expectations have risen faster than most businesses have been able to keep up with. The longer answer involves a combination of economic pressure, post-pandemic service degradation, and a structural problem with how most organisations are built to respond to feedback.

I spent a number of years running agencies where client NPS was a live metric on the dashboard. What I noticed was that scores would hold reasonably well during stable periods, then crater the moment something went wrong and the client felt they weren’t being heard. The score itself was almost never the issue. The issue was the response time between a client flagging a problem and someone in the business actually doing something about it. That lag is where trust dies.

The same dynamic plays out at scale in B2C. Customers are not necessarily expecting perfection. They are expecting responsiveness. When a brand collects NPS data and then does nothing visible with it, the next survey invitation feels like an insult. Response rates drop, the data gets noisier, and the score becomes less reliable as a signal of anything meaningful.

Understanding the three dimensions of customer experience helps clarify why NPS is a lagging indicator. By the time a score drops, the failure has usually already happened across multiple touchpoints. The survey is just the moment a customer finally has a formal channel to express what they have been feeling for weeks.

The Difference Between Tracking NPS and Acting on It

Most organisations track NPS. Far fewer have a functioning closed-loop process. The distinction is significant, and it shows up in the data over time.

Closed-loop NPS means that when a detractor submits a low score, someone from the business contacts them within a defined window, acknowledges the issue, and follows up on what was done about it. It sounds simple. In practice, it requires cross-functional coordination between CX, operations, customer service, and sometimes product, all of which have competing priorities and separate reporting lines.

When I was working on a turnaround for a loss-making agency, one of the first things we did was implement a basic version of this. We contacted every client who had given us a score below seven, within 48 hours, with a specific agenda: not to defend ourselves, but to understand what had gone wrong. Some of those conversations were uncomfortable. Several of them saved accounts that were on the verge of leaving. A few of them surfaced operational issues that we had been papering over with account management charm rather than actually fixing.

That is the real value of NPS when it is used properly. It is not a vanity metric for the board pack. It is an early warning system, and it only works if someone is watching the dashboard and empowered to act on what they see. Building a customer experience dashboard that surfaces NPS alongside operational data is a practical starting point for teams that want to make the feedback loop functional rather than ceremonial.

Survey Fatigue Is a Real Problem, and Most Brands Are Making It Worse

There is a certain irony in the fact that the more a brand cares about customer experience, the more surveys it tends to send, and the more it contributes to the fatigue that makes those surveys less useful.

Post-purchase NPS. Post-service NPS. Quarterly relationship NPS. Annual brand tracking. It adds up quickly, and customers are increasingly selective about which surveys they complete. The ones who do respond are often either very happy or very unhappy, which skews the data in ways that can mislead leadership teams into thinking things are better or worse than they actually are.

The brands handling this well are moving toward transactional NPS that is tightly contextualised. A single question, triggered immediately after a specific interaction, with a follow-up text field that is short and focused. This approach generates higher response rates and more actionable verbatim because the customer is responding to something specific that just happened, rather than trying to average their feelings across six months of interactions.

It is also worth thinking about channel. Email surveys have declining open rates across most demographics. In-app and SMS surveys, delivered in the right context, tend to perform better. For brands operating across multiple touchpoints, the channel mix for NPS collection should reflect where customers are actually engaged, not where it is easiest to send a survey. The difference between integrated marketing and omnichannel marketing is relevant here: a genuinely omnichannel approach to NPS collection means meeting customers in the right channel at the right moment, not blasting the same survey through every available pipe.

What NPS Verbatim Is Actually Telling You

The score is almost secondary. The verbatim is where the intelligence lives.

I have sat through enough NPS readouts to know that most of the conversation in the room focuses on whether the number went up or down, and very little time is spent on what customers actually wrote. That is backwards. A score moving from 38 to 41 might be statistical noise. A cluster of verbatim comments all mentioning the same friction point in the checkout process is a specific, actionable finding that someone should be taking to the product team that afternoon.

The challenge has always been scale. If you are running a high-volume consumer business, you might be collecting thousands of open-text responses per month. Manual analysis is not realistic. This is one area where AI tools are genuinely useful, not as a replacement for human judgment, but as a way of surfacing themes and anomalies in verbatim data that would otherwise be buried in a spreadsheet nobody reads.
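As a minimal illustration of the theme-surfacing idea, here is a first pass using plain keyword counting rather than an AI pipeline, with entirely hypothetical verbatim comments. A real system would use embeddings or an LLM to group synonyms and phrases, but the principle is the same:

```python
import re
from collections import Counter

# Common words to ignore; a real pipeline would use a proper stopword list.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "was", "is", "it",
             "i", "my", "in", "for", "on", "at", "so", "but"}

def surface_themes(comments, top_n=5):
    """Count non-stopword terms across verbatim comments to surface recurring themes."""
    counts = Counter()
    for comment in comments:
        words = re.findall(r"[a-z']+", comment.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

# Hypothetical verbatim responses
comments = [
    "Checkout kept failing on my phone",
    "The checkout page froze twice before the order went through",
    "Delivery was fine but checkout was confusing",
]
print(surface_themes(comments, top_n=3))  # "checkout" surfaces as the dominant theme
```

Even this crude version makes the point: the cluster around "checkout" is the actionable finding, and it is the verbatim, not the score, that points at the fix.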

The question of how much autonomy to give those AI tools is worth thinking through carefully. The distinction between governed AI and autonomous AI in customer experience software matters when you are deciding how much of the analysis and response process to automate versus keep under human oversight. For NPS specifically, I would argue the analysis can be heavily automated, but the response, particularly to detractors, should stay human.

Analysing customer experience analytics at the verbatim level also tends to reveal things that quantitative metrics miss entirely. Customers often describe the emotional texture of an interaction, not just the functional outcome. They will tell you they felt dismissed, or that the process felt designed for the company’s convenience rather than theirs. That kind of signal does not show up in CSAT scores. It shows up in text.

Where NPS Fits in the Broader Customer Experience

NPS is a relationship metric. It measures how a customer feels about a brand at a point in time, shaped by the cumulative weight of every interaction they have had. That means a single bad experience in a long positive relationship might not move the score much. And a single outstanding experience after a string of mediocre ones might not either. Context matters.

This is why NPS needs to be mapped against experience stage to be truly useful. A detractor score from a new customer who just had a difficult onboarding experience is a very different problem from a detractor score from a long-term customer who has started to feel taken for granted. The remedies are different. The urgency is different. The business risk is different.

For brands in food and beverage, where purchase frequency is high and the relationship is built across many small moments, this mapping is particularly important. The food and beverage customer experience has distinct inflection points where NPS data is most predictive of future behaviour, and collecting the score at the wrong moment in that experience can give you a misleading read on customer health.

Across retail more broadly, the relationship between NPS and actual purchasing behaviour has become more complex as the number of touchpoints has multiplied. A customer might rate you a nine on a post-purchase survey and then quietly switch to a competitor three months later because the loyalty programme stopped feeling worth it. NPS is a snapshot. Retention data tells you what actually happened. The two need to be read together.

The Honest Conversation About NPS Benchmarks

Industry benchmarks for NPS are published regularly, and they are worth treating with a degree of scepticism. Not because the data is fabricated, but because the methodology varies significantly between sources, the sample composition matters enormously, and a score that looks strong against one benchmark might look average against another.

More practically: your competitors’ NPS scores are not your target. Your customers’ expectations are your target. I have worked with clients who were proud of an NPS that was above the sector average while simultaneously losing market share, because the sector average was low and customers were choosing to stay out of inertia rather than genuine preference. That is not a healthy position.

The more useful benchmark is your own trend line over time, segmented by customer cohort, channel, and product line. A score that is improving in your highest-value customer segment is more meaningful than an aggregate score that is flat because gains in one area are being offset by declines in another.

Forrester has written about the relationship between customer experience investment and business outcomes for years, and the link between CX performance and revenue growth is well-established in their research. The point is not that NPS drives revenue directly. The point is that the behaviours NPS is trying to measure (loyalty, advocacy, willingness to repurchase) are the ones that compound over time into durable commercial advantage.

NPS and Customer Success: The Retention Connection

In B2B and subscription businesses, NPS has a particularly direct relationship with retention. A detractor in a B2B account is not just a dissatisfied individual. They are a potential churn risk, a potential negative reference, and sometimes a person who will move to a competitor organisation and take their buying preferences with them.

This is why the best customer success teams treat NPS as a live input into their account health scoring, not a quarterly report. When a key contact at an account drops from a seven to a four, that should trigger a conversation immediately, not sit in a queue until the next business review.
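A sketch of what that escalation trigger might look like in code. The thresholds here are illustrative, not industry standards, and any real implementation would sit inside the CS platform's health-scoring logic:

```python
def should_escalate(score_history, floor=6, drop=2):
    """Flag a contact whose latest NPS score fell sharply or landed in detractor range.

    score_history: chronological scores for one contact at one account.
    floor and drop are illustrative thresholds, not standards.
    """
    if not score_history:
        return False
    latest = score_history[-1]
    if latest <= floor:          # detractor territory: always worth a conversation
        return True
    if len(score_history) >= 2 and score_history[-2] - latest >= drop:
        return True              # sharp decline even if still above the floor
    return False

# A key contact sliding from seven to four should trigger a call now,
# not wait for the next quarterly business review.
print(should_escalate([8, 7, 4]))   # True
print(should_escalate([9, 9, 9]))   # False
```

The logic is trivial; the hard part, as the rest of this section argues, is the ownership and escalation path that turns the flag into a human conversation within a defined window.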

The infrastructure for making this work sits at the intersection of CX measurement and customer success operations. Customer success enablement is the discipline that connects the measurement to the motion: the playbooks, the tooling, the escalation paths, and the cross-functional coordination that turns an NPS signal into a retention action before the customer has already decided to leave.

I have seen this done well and I have seen it done badly. Done badly, it looks like a CSM sending a templated email saying “we noticed your recent score and wanted to check in.” Done well, it looks like a senior person from the business calling within 24 hours, having read the verbatim, with a specific proposal for addressing the issue that was raised. The difference in outcome between those two responses is not marginal.

What Good NPS Practice Looks Like in Retail and Omnichannel Contexts

Retail presents a specific challenge for NPS because the customer relationship spans multiple channels, and a negative experience in one channel can colour the overall relationship score even when other channels are performing well. A customer who loves your in-store experience but finds your app frustrating might give you a six, and without channel-level attribution in your NPS data, you will not know where to focus.

The brands getting this right are collecting NPS at the channel level and aggregating up, rather than collecting a single relationship score and trying to disaggregate it. This requires more survey infrastructure, but the payoff in diagnostic clarity is significant. You stop asking “why is our NPS 34?” and start asking “why is our in-app NPS 22 while our in-store NPS is 51?” Those are very different questions with very different answers.
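The arithmetic behind that channel-level question is simple: NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A sketch with made-up response data, showing how an unremarkable aggregate can hide a channel-level problem:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), rounded."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_channel(responses):
    """Group (channel, score) pairs and compute NPS per channel plus the aggregate."""
    channels = {}
    for channel, score in responses:
        channels.setdefault(channel, []).append(score)
    result = {ch: nps(scores) for ch, scores in channels.items()}
    result["overall"] = nps([score for _, score in responses])
    return result

# Hypothetical survey responses across two channels
responses = [("in_store", 10), ("in_store", 9), ("in_store", 8),
             ("in_app", 4), ("in_app", 6), ("in_app", 9)]
print(nps_by_channel(responses))  # {'in_store': 67, 'in_app': -33, 'overall': 17}
```

An overall score of 17 looks merely mediocre; the channel split shows a strong store experience subsidising an app experience that is actively creating detractors.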

For retail media specifically, where the customer experience spans owned channels, third-party marketplaces, and media environments that the brand does not fully control, the measurement complexity increases further. Omnichannel strategies for retail media need to account for the fact that a customer’s experience of a brand is being shaped by touchpoints the brand may have limited visibility into, and NPS collected after a retail media-driven purchase may reflect the marketplace experience as much as the brand experience.

Video is also becoming a more significant part of how brands respond to NPS feedback at scale. Rather than a generic email response to detractors, some brands are experimenting with personalised video messages that acknowledge the specific feedback and explain what has been done about it. Video in customer experience is not a gimmick when it is used in a genuinely personalised way. It changes the emotional register of the response and signals that a real person engaged with what the customer said.

The Structural Problem NPS Cannot Solve

I want to be honest about something that tends to get glossed over in customer experience discussions. NPS is a measurement tool. It tells you how customers feel. It does not tell you whether your leadership team is willing to make the changes that would improve how customers feel. And in my experience, that is often where the real problem sits.

I have worked with businesses that had excellent NPS programmes, sophisticated verbatim analysis, well-designed closed-loop processes, and scores that were still declining year on year. The measurement was fine. The problem was that the findings kept pointing to issues that were structurally difficult or commercially inconvenient to fix, and the organisation kept finding reasons to prioritise other things.

Marketing is sometimes used as a blunt instrument to prop up businesses with more fundamental issues. You can spend on acquisition, on loyalty mechanics, on brand campaigns, and temporarily move NPS in the right direction. But if the product is genuinely inferior, or the service model is designed around operational efficiency rather than customer outcomes, or the pricing strategy treats loyal customers worse than new ones, NPS will keep telling you the same thing until someone is willing to hear it.

The companies that genuinely improve NPS over time are the ones where leadership treats the score as a reflection of business quality, not a marketing metric. That is a cultural position, not a technical one. No amount of survey optimisation or AI-powered verbatim analysis will substitute for an organisation that is genuinely oriented around doing right by its customers.

If you are thinking about how NPS fits into a broader CX strategy, the Customer Experience hub brings together the measurement, technology, and strategic frameworks that make the difference between tracking a number and actually improving the experience behind it.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a good NPS score for customer experience?
There is no universal answer. NPS benchmarks vary significantly by industry, methodology, and sample composition. A score that looks strong in one sector may be average in another. More useful than chasing an industry benchmark is tracking your own trend line over time, segmented by customer cohort and channel, and understanding whether your highest-value customers are becoming more or less likely to recommend you.
How often should you run an NPS survey?
For most businesses, a combination of transactional NPS (triggered after specific interactions) and periodic relationship NPS (quarterly or biannual) works better than annual surveys alone. Transactional NPS gives you faster signals and higher response rates because the feedback is contextually relevant. Relationship NPS gives you a broader read on overall sentiment. Avoid surveying the same customers too frequently, as this contributes to survey fatigue and degrades response quality.
What is closed-loop NPS and why does it matter?
Closed-loop NPS is the process of following up with customers who have submitted a survey response, particularly detractors, to acknowledge their feedback and take visible action on it. It matters because collecting NPS without acting on it erodes trust over time. Customers who see their feedback acknowledged and acted upon are more likely to respond to future surveys and more likely to update their view of the brand. Closed-loop processes require cross-functional coordination and clear ownership to work effectively.
Can AI improve how companies use NPS data?
AI is genuinely useful for analysing NPS verbatim at scale, surfacing themes, flagging anomalies, and prioritising which feedback warrants immediate attention. For high-volume consumer businesses collecting thousands of open-text responses per month, manual analysis is not realistic. However, the quality of the action taken on that analysis still depends on human judgment and organisational will. AI can tell you what customers are saying. It cannot make your business willing to act on it.
Why is NPS declining across many industries?
Several factors are contributing. Customer expectations have risen, particularly around responsiveness and personalisation. Post-pandemic service quality degraded in many sectors and has not fully recovered. Survey fatigue is suppressing response rates and skewing samples toward extreme respondents. And many organisations collect NPS without acting on it, which compounds dissatisfaction among customers who feel their feedback is ignored. The businesses holding or improving NPS are typically the ones with functional closed-loop processes and genuine organisational commitment to acting on what customers say.
