Net Advocacy Score: The Growth Metric Most Teams Ignore
Net Advocacy Score measures the proportion of your customers who actively recommend your brand to others, minus those who actively discourage it. Unlike Net Promoter Score, which asks about intent, Net Advocacy Score tracks behaviour: who is actually talking, and what they are saying. That distinction matters more than most measurement frameworks acknowledge.
Most brands have a reasonable idea of their NPS. Far fewer have any honest read on how many of their customers are out in the world doing their selling for them, and how many are quietly doing the opposite. Net Advocacy Score closes that gap, and in doing so, it connects word-of-mouth to commercial outcomes in a way that intent-based metrics rarely do.
Key Takeaways
- Net Advocacy Score measures actual advocacy behaviour, not stated intent, making it a more commercially honest metric than NPS alone.
- Advocacy is not a loyalty metric. A satisfied customer and an active advocate are not the same thing, and conflating them leads to bad strategic decisions.
- The brands with the strongest advocacy scores tend to have earned them through product and experience quality, not through referral incentives or review prompts.
- Negative advocacy, customers who actively warn others away, can offset years of acquisition spend. Most brands underestimate its scale.
- Net Advocacy Score is most useful when tracked over time and segmented by cohort, not treated as a single headline number.
In This Article
- Why Most Brands Are Measuring the Wrong Thing
- How Net Advocacy Score Is Calculated
- The Difference Between Loyalty, Satisfaction, and Advocacy
- What Drives Advocacy and What Does Not
- Negative Advocacy: The Number Nobody Wants to Look At
- How to Use Net Advocacy Score in a Growth Strategy
- Net Advocacy Score vs. Net Promoter Score: Which One Should You Use?
- Building an Advocacy Measurement Programme
- The Honest Limitations of the Metric
- What Good Looks Like in Practice
Why Most Brands Are Measuring the Wrong Thing
I spent years sitting in rooms where NPS was treated as the definitive measure of customer health. Boards loved it. It was a single number, it moved in ways you could track quarter to quarter, and it gave everyone something to point at. The problem is that NPS measures what people say they might do. It does not measure what they actually do.
When I was running an agency and we were pitching growth strategy to clients, the conversation would almost always centre on acquisition. How do we get more customers? What channels are working? Where should we increase spend? Rarely did anyone ask: how many customers are we losing to negative word-of-mouth, and how many new customers are we gaining because existing ones are recommending us? Those two questions are arguably more important than anything happening in paid media, and most brands have no reliable answer to either.
This is where Net Advocacy Score becomes useful. It is not a replacement for NPS. It is a different measurement entirely, one that sits closer to actual commercial outcomes. If your NPS is strong but your advocacy score is weak, you have a population of satisfied customers who are not talking. That is a missed growth opportunity. If your NPS is moderate but your advocacy score is negative, you have a structural problem that no amount of acquisition spend will fix.
The broader question of why go-to-market strategies stall, despite strong product and reasonable awareness, is one I explore in more depth across the Go-To-Market and Growth Strategy hub. Advocacy sits at the intersection of customer experience and commercial performance, and it is one of the most underused levers in a growth strategy.
How Net Advocacy Score Is Calculated
The calculation itself is straightforward. You survey a representative sample of your customers and ask a single behavioural question: in the past three months, have you recommended this brand to someone, discouraged someone from using it, or neither? You then calculate the percentage of active advocates minus the percentage of active detractors. The result is your Net Advocacy Score.
Some versions of the framework add a second layer, asking those who said they recommended the brand how many people they recommended it to, and how often. This gives you an advocacy volume figure rather than just a penetration rate, which is more useful when you are trying to model the downstream acquisition impact of word-of-mouth.
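The arithmetic described above is simple enough to sketch in a few lines. This is an illustrative implementation, not a standard library or official formula: the response labels, sample numbers, and function names are my own for the example.

```python
from collections import Counter

def net_advocacy_score(responses):
    """Compute Net Advocacy Score from survey responses.

    Each response is one of: "recommended", "discouraged", "neither".
    Returns % active advocates minus % active detractors.
    """
    counts = Counter(responses)
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no survey responses")
    advocates = counts["recommended"] / total
    detractors = counts["discouraged"] / total
    return round((advocates - detractors) * 100, 1)

def advocacy_volume(referral_counts):
    """Second layer: mean number of people each advocate says they
    recommended the brand to, giving a volume figure rather than
    just a penetration rate."""
    return sum(referral_counts) / len(referral_counts)

# Illustrative sample: 500 responses, 140 recommended, 45 discouraged
responses = ["recommended"] * 140 + ["discouraged"] * 45 + ["neither"] * 315
print(net_advocacy_score(responses))  # 19.0
```

The volume layer matters because a score of +19 from advocates who each told one person is a very different commercial signal from the same score where each advocate told five.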
What the score does not capture on its own is why people are advocating or detracting. That requires a follow-up question, and it is the most commercially valuable part of the exercise. The brands that use Net Advocacy Score well do not just track the number. They segment it by customer cohort, by product line, by acquisition channel, and by tenure. A customer who has been with you for six months and is actively recommending you is a different signal from one who has been with you for four years and is only now starting to talk. Both matter, but for different reasons.
There is no universally agreed benchmark for what constitutes a strong Net Advocacy Score, and anyone who gives you a precise industry average without citing a specific, verifiable source is probably extrapolating. What you can do is establish your own baseline, track it consistently, and compare it against your own historical trend. That is more useful than chasing an external number that may not reflect your category, your price point, or your customer base.
The Difference Between Loyalty, Satisfaction, and Advocacy
One of the most persistent confusions in customer measurement is treating loyalty, satisfaction, and advocacy as if they are points on the same spectrum. They are not. They are distinct states, and a customer can be high on one and low on the others.
Satisfaction is a measure of whether a product or service met expectations. It is largely transactional. A customer can be satisfied and still switch to a competitor the moment a better price appears. Satisfaction does not generate word-of-mouth at any meaningful scale. It generates inertia.
Loyalty is a measure of repeat behaviour. A loyal customer comes back. But loyalty can be driven by switching costs, habit, or lack of alternatives rather than genuine preference. A customer who stays because changing is inconvenient is not the same as one who stays because they believe you are the best option. The former will not advocate. The latter might.
Advocacy is something else entirely. It requires a customer to expend social capital on your behalf. When someone recommends a brand to a friend or colleague, they are putting their own credibility on the line. That is a fundamentally different act from renewing a subscription or giving a product a four-star rating. It is the highest form of commercial endorsement, and it is earned, not engineered.
I remember a client conversation early in my agency career where the marketing director was very proud of their CSAT scores. They were consistently above 80%. What they did not know was that their advocacy rate, the proportion of customers who had actually recommended them to anyone in the past six months, was somewhere around 12%. Satisfied customers, mostly silent. The gap between those two numbers was the size of their growth problem.
What Drives Advocacy and What Does Not
This is where the measurement gets interesting, because the drivers of advocacy are not what most marketing teams spend their time optimising for.
Product quality and reliability are consistently the strongest predictors of genuine advocacy. Not brand advertising. Not loyalty programmes. Not referral incentives. The brands with the strongest organic advocacy scores tend to have built something that genuinely works better than the alternative, or that creates an experience distinctive enough to be worth talking about. That is a product and operations story as much as it is a marketing one.
Customer service is the second major driver, specifically the handling of problems. A customer whose complaint is resolved quickly and fairly is statistically more likely to advocate than one who never had a problem at all. This counterintuitive finding has been replicated across enough categories that it is worth taking seriously. How you handle failure tells customers more about your brand than how you perform when everything goes right.
Referral incentives are more complicated. They can increase the volume of referrals in the short term, but they tend to attract a different type of referrer, one who is motivated by the reward rather than by genuine enthusiasm. The quality of customers acquired through incentivised referrals is often lower than those acquired through organic advocacy. More importantly, incentivised referrals inflate your Net Advocacy Score in ways that make it harder to read as a genuine signal of brand health. If you are running a referral programme, track incentivised and organic advocacy separately.
There is a useful parallel here with how I used to think about lower-funnel performance marketing. For a long time, I overvalued it. The attribution looked clean, the numbers were convincing, and the ROI case was easy to make. It took me longer than I would like to admit to recognise that a significant portion of what performance was being credited for was going to happen regardless. The customer was already in the market, already predisposed to buy. We were capturing intent, not creating it. Advocacy works the other way. It reaches people before they are in the market, and it creates the predisposition that makes everything else downstream more efficient. That is a fundamentally different, and more valuable, commercial mechanism.
The challenge of making go-to-market feel less like a grind is partly a distribution problem and partly a trust problem. Advocacy addresses both. It puts your message in front of new audiences through channels that carry inherent credibility, and it does so at a cost that paid media cannot match at scale.
Negative Advocacy: The Number Nobody Wants to Look At
Most brands focus on the positive side of their advocacy score. Fewer spend meaningful time on the detractor figure, which is a mistake, because negative advocacy is asymmetrically damaging.
A customer who actively warns others away from a brand does not just represent lost revenue from their own lapsed relationship. They actively suppress acquisition from within their network. The reach of a single vocal detractor is difficult to quantify precisely, but it is almost certainly larger than the reach of a single advocate, partly because negative experiences tend to generate stronger emotional responses, and partly because warnings carry a social utility that recommendations sometimes do not. Telling a friend to avoid something feels like a public service. Recommending something feels like a personal preference.
When I was working on a turnaround for a business that had been haemorrhaging customers for two years, the instinct from the leadership team was to focus on acquisition. Get more people in the door. The problem was that the detractor rate was so high that new customers were arriving, having a poor experience, and then actively telling others. The acquisition spend was essentially funding a negative advocacy engine. We had to fix the product and service experience first, reduce the detractor rate, and only then did it make sense to invest in growth. Pouring acquisition budget into a business with a structural advocacy problem is one of the more reliable ways to accelerate failure.
The Forrester intelligent growth model makes a related point about the relationship between customer experience quality and sustainable growth. Brands that grow by acquiring customers they cannot retain, or that generate negative post-purchase experiences, are not building equity. They are running a leaky bucket, and the leak gets worse as the brand scales.
How to Use Net Advocacy Score in a Growth Strategy
The metric is only useful if it informs decisions. Here is how I have seen it used well, and how I have seen it wasted.
The brands that use it well treat it as a diagnostic, not a dashboard vanity metric. They segment the score by customer cohort and ask: which segments are advocating most, and what do they have in common? Which segments are detracting, and what experience or product failure is driving that? The answers to those questions tell you where to invest in product improvement, where to focus customer success resources, and which customer profiles are worth acquiring more of because they generate downstream value beyond their own revenue.
They also track it over time rather than treating it as a point-in-time measurement. A Net Advocacy Score that is improving quarter on quarter, even if it is not yet at a level you would be proud of, tells a different story than one that is declining from a high base. The direction of travel matters as much as the absolute number.
The brands that waste it tend to report it upwards as a single headline figure, celebrate when it goes up, and move on. No segmentation, no follow-up qualitative work, no connection to specific operational or product decisions. It becomes another number on a slide rather than a lens through which you understand your customers.
One practical application that I have found genuinely useful is connecting advocacy data to acquisition channel analysis. If you can identify which acquisition channels are producing your highest-advocacy customers, you can make a much stronger case for shifting budget toward those channels, even if their immediate cost-per-acquisition looks less efficient. A customer acquired at 1.4x the cost who then generates two organic referrals has a very different commercial profile than one acquired cheaply who never talks about you. Most attribution models miss this entirely because they are built to measure the first transaction, not the downstream value of the relationship.
Creator-led acquisition is one area where this connection between channel and advocacy quality is particularly interesting. Brands using creators in their go-to-market strategy often find that the customers acquired through creator content have higher advocacy rates than those acquired through traditional paid channels, partly because the trust transfer from creator to audience creates a stronger initial relationship with the brand. That is worth measuring deliberately rather than assuming.
The growth loop model that has gained traction in product-led growth circles is essentially a structural argument for building advocacy into the product experience itself, so that existing users naturally pull in new ones. Net Advocacy Score is the measurement layer that tells you whether that loop is actually functioning, and at what rate.
Net Advocacy Score vs. Net Promoter Score: Which One Should You Use?
This question comes up often, and the honest answer is that they measure different things and the choice depends on what decision you are trying to inform.
NPS is a measure of intent. It asks customers how likely they are to recommend you on a scale of zero to ten. It is widely used, easy to benchmark against industry averages, and has the advantage of a large body of comparative data built up over two decades of adoption. Its weakness is that intent and behaviour diverge significantly in most categories. Customers who say they are highly likely to recommend a brand often do not, either because the right moment never arises or because the intent was not as strong as the score suggested.
Net Advocacy Score measures what has actually happened. It is behaviourally grounded, which makes it a more honest signal of commercial impact. Its weakness is that it is harder to benchmark externally because it is less standardised, and it requires a slightly more careful survey design to avoid leading questions that inflate the positive response.
If I were advising a brand on which to prioritise, I would say this: use NPS if you need external benchmarking and board-level reporting in a format that stakeholders will recognise. Use Net Advocacy Score if you want to understand the actual commercial impact of your customer relationships and make better growth strategy decisions. Ideally, use both, but treat them as answering different questions rather than as interchangeable measures of the same thing.
There is a version of this debate that surfaces in almost every measurement conversation I have been part of: the tension between what is easy to report and what is actually useful. NPS won the adoption battle partly because it is simple and partly because Fred Reichheld and Bain gave it a compelling narrative. Net Advocacy Score has a more direct line to commercial outcomes, but it requires more work to implement well and more confidence to defend in a room full of people who have been reporting NPS for years. That should not put you off it.
Building an Advocacy Measurement Programme
If you are starting from scratch, the first thing to do is resist the temptation to build something complicated. The most common reason advocacy measurement programmes fail is that they are over-engineered before they have produced a single useful insight.
Start with a simple survey sent to a representative sample of your active customer base. The core question is behavioural: in the past three months, have you recommended this brand to anyone, discouraged anyone from using it, or neither? Add a follow-up asking why, and if they said they recommended, ask roughly how many people and in what context. That is enough to generate a baseline score and the first layer of qualitative insight.
Run it quarterly rather than continuously in the early stages. Continuous measurement creates noise, and you need enough time between surveys to see whether operational or product changes have actually moved the needle. Quarterly gives you a trend line within a year and enough data to start making decisions.
Segment from the start. At minimum, cut the data by customer tenure, acquisition channel, and product or service type. If you have the data, add geography and customer value tier. The aggregate score is a starting point. The segmented scores are where the strategic insight lives.
Connect the data to your CRM so that you can identify individual advocates and detractors, not just aggregate percentages. Advocates are worth investing in. They may be candidates for case studies, community programmes, or early access to new products. Detractors are worth engaging with individually, not to manage them, but to understand what went wrong and whether it is a systemic issue or an isolated one.
The BCG perspective on go-to-market strategy makes the point that brand and commercial strategy need to be more tightly integrated than most organisations manage. Advocacy measurement is one of the practical mechanisms through which that integration can happen: it connects the brand experience to commercial outcomes in a way that both marketing and finance can engage with.
For teams looking to build out a broader growth measurement toolkit, the range of growth tools available has expanded considerably, but the discipline of knowing what you are measuring and why has not changed. Net Advocacy Score is most useful when it is part of a coherent measurement framework, not a standalone metric that gets reported in isolation.
The Honest Limitations of the Metric
No metric is a complete picture of reality, and Net Advocacy Score is no exception. It is worth being clear about what it does not tell you.
It does not tell you about passive word-of-mouth: the customer who mentions your brand positively in passing without it being a deliberate recommendation. In some categories, particularly those with high social visibility, this passive advocacy may be more commercially significant than active referrals. The metric does not capture it.
It does not tell you about the quality of the advocacy. A customer who recommends you to three close friends in a relevant context is generating more commercial value than one who mentions you once in a social media post to an audience that has no interest in your category. Volume and relevance are different things, and the basic Net Advocacy Score does not distinguish between them.
It is also subject to survey bias. Customers who respond to surveys are not a random sample of your customer base. They tend to be either more satisfied or more dissatisfied than the average, and response rates vary by customer segment in ways that can skew the results. This does not make the metric useless, but it means you should treat it as a directional indicator rather than a precise measurement.
I spent enough time judging the Effie Awards to have a deep respect for the gap between what metrics claim to measure and what they actually capture. The Effies require entrants to demonstrate a causal link between marketing activity and business outcomes, and the quality of that reasoning varies enormously. Net Advocacy Score, like any metric, is only as useful as the quality of thinking applied to it. A number without a question behind it is just noise.
The broader challenge of building a growth strategy that is commercially honest rather than just analytically tidy is something I return to regularly across the growth strategy content on this site. Net Advocacy Score fits into that framework as one lens among several, not as a silver bullet that replaces the need for judgement.
What Good Looks Like in Practice
The brands that do advocacy measurement well share a few common characteristics, and none of them are particularly glamorous.
They are consistent. They run the same survey, with the same question wording, to a comparable sample, on the same cadence. This sounds obvious, but it is surprisingly rare. Changing the question wording mid-programme, or shifting from a random sample to a self-selected one, destroys the trend line and makes the data useless for tracking progress.
They close the loop. When the data identifies a pattern, whether it is a product feature that is driving detraction or a specific customer experience moment that is generating advocacy, they act on it and then measure whether the action changed the score. This is what turns a measurement programme into a management tool.
They share the data widely. Advocacy is not a marketing metric. It is a business metric. The insights from a well-run advocacy measurement programme are as relevant to product, operations, and customer service as they are to marketing. The brands that treat it as a marketing dashboard miss most of its value.
And they are honest about what the number means. A Net Advocacy Score of plus 15 in a category where the average is plus 30 is not a success story dressed up in positive framing. It is a signal that something is structurally underperforming, and the honest response is to understand why rather than to celebrate that the number is positive at all.
That kind of commercial honesty is harder to maintain than it sounds, particularly when the data is going into a board presentation. But it is the only way the metric earns its place in a serious growth strategy conversation. Metrics that are managed for optics rather than insight stop being useful almost immediately. The people in the room who have run businesses know the difference.
For those building out their growth measurement approach alongside advocacy tracking, examples of how growth strategies have played out in practice can provide useful context for how organic and paid mechanisms interact over time. The pattern that emerges consistently is that brands with strong advocacy tend to see better efficiency from their paid channels, not because advocacy replaces paid, but because it creates a warmer market for everything else to work within.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
