Net Advocacy Score: The Growth Metric Most Teams Ignore

Net Advocacy Score measures the proportion of customers who actively recommend your brand to others, minus those who actively discourage it. Unlike Net Promoter Score, which asks about intent, Net Advocacy Score tracks observed behaviour, making it a more commercially honest signal of whether your brand is generating genuine word-of-mouth growth.

Most companies are sitting on significant unrealised growth potential in their existing customer base and have no structured way to measure or activate it. Net Advocacy Score gives you that mechanism, and when it is wired into your go-to-market strategy, it changes how you think about acquisition, retention, and the relationship between the two.

Key Takeaways

  • Net Advocacy Score measures actual referral behaviour, not stated intent, making it a more reliable predictor of organic growth than NPS alone.
  • Most brands over-invest in capturing existing demand and under-invest in the advocacy engine that creates new demand without paid media spend.
  • A low Net Advocacy Score is rarely a marketing problem. It is almost always a product, service, or experience problem that marketing cannot fix by itself.
  • Advocacy-driven growth compounds over time. A customer who refers two others who each refer two more is worth multiples of their direct lifetime value.
  • Tracking advocacy separately from satisfaction gives you an early warning system. Satisfaction can be high while advocacy stays flat, which tells you something important about emotional engagement with your brand.

Why Most Growth Metrics Miss the Point

Spend enough time inside agencies and you develop a fairly reliable instinct for which metrics actually matter to a business and which ones exist to fill slide decks. I have sat in enough quarterly reviews to know that the metrics clients care most about are often the ones that are easiest to report, not the ones that are most predictive of future revenue.

Conversion rate. Cost per acquisition. Return on ad spend. These are useful, but they all share a structural limitation: they measure what has already happened. They tell you how efficiently you captured demand that existed. They say very little about whether you are building the conditions for sustainable growth.

Net Advocacy Score sits in a different category. It is a leading indicator. When customers recommend your brand to others, they are doing something that paid media cannot replicate: lending their personal credibility to your product. That is a fundamentally different kind of trust signal, and it tends to convert at a meaningfully higher rate than any channel you can buy.

If you are thinking about how Net Advocacy Score fits into a broader commercial strategy, it belongs squarely within go-to-market and growth planning. The Go-To-Market & Growth Strategy hub on The Marketing Juice covers the wider strategic context, but this article focuses specifically on what advocacy measurement is, why most teams get it wrong, and how to make it actionable rather than decorative.

What Is Net Advocacy Score, Exactly?

Net Advocacy Score is calculated by asking customers a simple behavioural question: have you recommended this brand, product, or service to someone else in the past three to six months? You then subtract the percentage of customers who report actively discouraging others from the percentage who report actively recommending. The result is your Net Advocacy Score.

The distinction from Net Promoter Score is worth being precise about. NPS asks: “How likely are you to recommend us?” Net Advocacy Score asks: “Have you recommended us?” One measures intention. The other measures behaviour. In my experience, the gap between the two is often larger than companies expect, and that gap is commercially significant.

I have worked with businesses that had an NPS in the high sixties and were quietly baffled about why organic referral traffic was flat and word-of-mouth was not moving the needle. When you look at actual advocacy behaviour rather than stated intent, the picture often looks quite different. People who say they would recommend something and people who actually do are not the same group, and conflating them leads to overconfidence in your brand health.

There is also a third category that NPS tends to obscure: active detractors. Not passive non-recommenders, but customers who are actively telling people to avoid you. Net Advocacy Score forces you to account for this group explicitly. A brand with 40% active advocates and 5% active detractors is in a very different position from one with 40% advocates and 25% detractors, even if both have the same NPS.
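The arithmetic behind the score is simple enough to sketch in a few lines. This is a minimal, illustrative Python example, not a standard API; the function name and the three response categories are assumptions for the sake of the sketch. It reproduces the comparison above: two brands with the same 40% of active advocates but very different detractor bases.

```python
def net_advocacy_score(responses):
    """Compute Net Advocacy Score from a list of survey responses.

    Each response is one of:
      'advocate'  - reported actively recommending the brand recently
      'detractor' - reported actively discouraging others
      'neutral'   - neither

    Returns advocates minus detractors as a percentage of all respondents.
    """
    total = len(responses)
    if total == 0:
        raise ValueError("no survey responses")
    advocates = sum(r == "advocate" for r in responses)
    detractors = sum(r == "detractor" for r in responses)
    return 100 * (advocates - detractors) / total


# Same share of advocates, very different detractor bases:
brand_a = ["advocate"] * 40 + ["detractor"] * 5 + ["neutral"] * 55
brand_b = ["advocate"] * 40 + ["detractor"] * 25 + ["neutral"] * 35

print(net_advocacy_score(brand_a))  # 35.0
print(net_advocacy_score(brand_b))  # 15.0
```

Same advocate percentage, a twenty-point gap in the score: the detractor term is doing exactly the diagnostic work that NPS tends to obscure.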

The Business Case for Taking Advocacy Seriously

Earlier in my career, I was firmly in the lower-funnel camp. I believed that if you could optimise the conversion path, reduce friction, and capture intent efficiently, you had done the job. It took me longer than I would like to admit to recognise how much of that “performance” was simply harvesting demand that would have converted anyway, often because of brand and word-of-mouth work that was happening in the background and getting no credit in the attribution model.

The clothing retail analogy has always stuck with me. A customer who tries something on is dramatically more likely to buy than one who does not. But the question worth asking is: what got them into the fitting room in the first place? For most retailers, a significant proportion of those customers walked in because someone they trusted said the brand was worth their time. That referral, which shows up nowhere in the performance data, is doing enormous commercial work.

Advocacy compounds in a way that paid acquisition does not. A customer who refers two new customers who each refer two more is worth multiples of their direct lifetime value. The compounding effect is real, and it accelerates over time in a way that increasing ad spend does not. BCG has written about this dynamic in the context of brand strategy and go-to-market planning, noting that organic growth mechanisms tend to be structurally more durable than paid acquisition at scale.

There is also a cost dimension that is easy to understate. Referred customers typically have lower acquisition costs, higher average order values, and better retention rates. When you track these cohorts separately, which most businesses do not, the economics of advocacy become very clear very quickly.

Why Net Advocacy Score Is Not a Marketing Metric

This is the part that tends to make marketing teams uncomfortable. Net Advocacy Score is, ultimately, a measure of the total customer experience, not just the marketing experience. If your product is mediocre, your onboarding is confusing, your customer service is slow, or your pricing feels unfair, no amount of marketing effort will move your advocacy score in a meaningful or sustained way.

I have seen this play out directly. One of the turnaround situations I worked on involved a business that had invested heavily in brand marketing and loyalty programmes, but whose advocacy scores were stubbornly flat. When we dug into the customer feedback, the issue was not brand perception. It was a product quality problem that had been quietly accumulating for eighteen months. Marketing was being asked to compensate for a problem it could not fix, and the advocacy data made that visible in a way that satisfaction surveys had not.

This is why Net Advocacy Score is most useful as a cross-functional metric rather than a marketing-owned metric. It belongs in the conversation between marketing, product, operations, and customer service. When advocacy is low, the question is not “what should we say differently?” It is “what are we doing differently?” That is a harder conversation, but it is the right one.

Teams that use customer feedback loops effectively tend to identify advocacy barriers earlier, because they are capturing qualitative signal alongside quantitative scores. The combination of “what would make you recommend us?” and an actual advocacy score gives you both the number and the reason behind it.

How to Measure Net Advocacy Score Without Overcomplicating It

The measurement approach does not need to be elaborate. The core question is behavioural: “In the past three months, have you recommended [brand] to a friend, colleague, or family member?” You can add a second question: “In the past three months, have you discouraged anyone from using [brand]?” The score is advocates minus detractors as a percentage of total respondents.

A few practical considerations that matter more than the formula itself:

Timing matters. Asking for advocacy data immediately after a purchase captures a moment of peak satisfaction that may not reflect longer-term behaviour. Asking at 60 or 90 days post-purchase tends to give you a more accurate picture of whether the experience held up over time.

Segmentation is where the value lives. An aggregate Net Advocacy Score is a starting point. The real insight comes from breaking it down by customer segment, product line, acquisition channel, and tenure. A brand might have strong advocacy among long-term customers and weak advocacy among recently acquired ones, which tells you something specific about the onboarding or early experience that is worth fixing.
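A segment breakdown needs nothing more than the standard library. The sketch below is illustrative: the segment labels and the tuple-based response format are assumptions, and in practice the segments would come from your CRM rather than hard-coded data. It shows the long-term versus newly acquired pattern described above.

```python
from collections import defaultdict


def nas_by_segment(responses):
    """Break Net Advocacy Score down by customer segment.

    `responses` is a list of (segment, status) pairs where status is
    'advocate', 'detractor', or 'neutral'. Returns {segment: score},
    each score being advocates minus detractors as a percentage of
    that segment's respondents.
    """
    counts = defaultdict(lambda: {"advocate": 0, "detractor": 0, "total": 0})
    for segment, status in responses:
        counts[segment]["total"] += 1
        if status in ("advocate", "detractor"):
            counts[segment][status] += 1
    return {
        seg: 100 * (c["advocate"] - c["detractor"]) / c["total"]
        for seg, c in counts.items()
    }


# Strong advocacy among long-term customers, flat among new ones:
survey = (
    [("long-term", "advocate")] * 30 + [("long-term", "neutral")] * 18
    + [("long-term", "detractor")] * 2
    + [("new", "advocate")] * 8 + [("new", "neutral")] * 34
    + [("new", "detractor")] * 8
)
print(nas_by_segment(survey))  # {'long-term': 56.0, 'new': 0.0}
```

An aggregate score across those 100 respondents would read as moderately healthy; the breakdown shows the problem sits entirely in the early customer experience.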

Follow-up questions earn the insight. Asking “what prompted you to recommend us?” or “what would have made you more likely to recommend us?” is where you get the actionable signal. The score tells you where you stand. The qualitative responses tell you why and what to do about it.

Track it over time, not as a snapshot. A single Net Advocacy Score is interesting. A trend line across twelve months is useful. A trend line correlated with specific product changes, service improvements, or campaign activity is genuinely valuable for decision-making.

Do not conflate channels. Word-of-mouth advocacy, online reviews, and social sharing are related but distinct behaviours. Net Advocacy Score is most commonly applied to offline and direct referral behaviour. If you want to track social advocacy separately, that is worth doing, but keep the measurement clean rather than blending signals that mean different things.

The Relationship Between Advocacy and Growth Loops

One of the more useful frameworks for thinking about how advocacy creates compounding growth is the growth loop model. Unlike a traditional funnel, which is linear and terminates at conversion, a growth loop is circular: the output of one cycle becomes the input for the next. Advocacy is one of the most powerful growth loop mechanisms available to a brand, because it generates new customers who, if their experience is strong enough, become advocates themselves.

The practical implication is that your acquisition economics improve over time if your advocacy engine is working. You are spending less to acquire each new customer because a growing proportion of them are arriving through referral rather than paid channels. This is why businesses with strong advocacy scores can often outgrow competitors while spending less on media, a dynamic that looks counterintuitive from the outside but makes complete sense once you understand the loop.

There are well-documented examples of brands that built significant scale primarily through advocacy-driven growth loops rather than conventional paid acquisition. The growth strategies of companies like Dropbox and Airbnb are often cited in this context, where referral mechanics were embedded directly into the product experience rather than bolted on as a marketing afterthought.

The important caveat is that a referral programme does not create advocacy. It activates advocacy that already exists. If your Net Advocacy Score is low, adding a referral incentive will not fix it. You will spend money rewarding a small number of people who would have referred anyway, while the underlying experience problem remains unaddressed.

I have seen this mistake made more times than I can count. A business with a struggling word-of-mouth rate launches a “refer a friend” programme with a discount incentive, reports a short-term uptick in referrals, and concludes the problem is solved. Six months later, the referral rate has returned to its previous level because the incentive was doing the work, not genuine advocacy. The moment the incentive is removed or reduced, the behaviour reverts.

Integrating Net Advocacy Score Into Go-To-Market Strategy

The most commercially useful version of Net Advocacy Score is one that is wired into your go-to-market planning rather than sitting in a separate customer experience report that nobody reads before the quarterly business review.

There are three specific places where advocacy data should inform GTM decisions:

Audience targeting and lookalike modelling. Your highest-advocacy customers are a signal about who your brand resonates with most deeply. If you can profile them, you can use that profile to inform paid acquisition targeting, content strategy, and channel selection. The customers who love you enough to recommend you are telling you something about your best-fit audience that no demographic segmentation can replicate.

Product and experience prioritisation. When you correlate Net Advocacy Score with specific product features, service touchpoints, or customer experience stages, you get a prioritisation framework for where to invest in improvement. High advocacy customers often have a specific experience in common. Low advocacy customers often have a specific friction point in common. Both are commercially valuable signals for product and operations teams.

Channel and message strategy. Brands with strong advocacy can afford to be more patient with brand-building activity because they have an organic growth engine supplementing their paid efforts. Brands with weak advocacy need to think carefully about whether their current channel mix is addressing the right part of the problem. If you are spending heavily on performance channels to compensate for weak word-of-mouth, you are treating a symptom rather than a cause. The BCG framework for go-to-market strategy emphasises that channel decisions need to be grounded in a clear understanding of how customers actually make decisions, which includes the role of peer influence.

There is also a creator and influencer dimension worth acknowledging. When brands work with creators on campaign activity, the most effective partnerships tend to be with creators whose audiences already have some natural affinity for the brand. This is advocacy logic applied to media strategy: you are borrowing the trust that already exists between a creator and their audience rather than trying to manufacture it from scratch. The go-to-market with creators frameworks that have emerged in recent years are essentially advocacy principles applied to paid and earned media.

What a Low Net Advocacy Score Is Actually Telling You

When I was running agencies, one of the disciplines I tried to maintain was separating what a metric says from what people want it to mean. A low Net Advocacy Score is uncomfortable data, and there is a natural tendency to explain it away or attribute it to measurement methodology rather than facing what it is actually telling you.

A low score typically means one of three things, and it is worth being honest about which one applies before deciding what to do.

The first is a product or service quality problem. The experience is not good enough to generate genuine enthusiasm. Customers are satisfied in a functional sense but not delighted in the way that prompts unsolicited recommendation. This is the most common scenario and the hardest to fix because it requires investment in the product or service itself, not in communications.

The second is a visibility problem. The experience is strong, but customers do not think to recommend it because the brand is not top of mind at the moments when recommendation is most natural. This is a more tractable marketing problem, and it is where things like reminder communications, community building, and social proof mechanics can genuinely move the needle.

The third is a category problem. Some categories have naturally low advocacy rates because recommending them carries social risk or because the purchase is private by nature. Financial products, healthcare services, and personal care categories often see lower advocacy rates not because the experience is poor but because people are simply less likely to discuss these purchases openly. In these categories, the benchmark for what constitutes a strong advocacy score looks different.

Diagnosing which of these applies requires qualitative research alongside the quantitative score. The number alone does not tell you which problem you have. Vidyard’s research on pipeline and revenue potential highlights how untapped growth potential often sits in existing customer relationships rather than new acquisition, which is precisely the dynamic that advocacy measurement is designed to surface.

Common Mistakes in Advocacy Measurement

The measurement errors I see most often are not technical. They are conceptual, and they tend to produce data that looks clean but leads to bad decisions.

Measuring advocacy immediately after a positive interaction. If you survey customers right after a successful support interaction or a smooth delivery, you will get inflated advocacy scores that do not reflect the broader relationship. Advocacy that is triggered by a single positive moment is not the same as advocacy that reflects sustained brand value. Measure at consistent intervals rather than at emotionally convenient moments.

Treating all advocates as equal. A customer who recommends your brand to one close friend is doing something meaningfully different from a customer who recommends it to fifty colleagues in a professional network. Volume of referrals and quality of referrals are both worth tracking if you want to understand the commercial value of your advocacy base.

Ignoring the detractor segment. Active detractors are doing real damage to your brand in conversations you cannot see or measure directly. A brand with 35% advocates and 20% active detractors has a serious problem that aggregate satisfaction scores will not reveal. The detractor segment deserves specific attention: who are they, what went wrong, and is there a pattern?

Not connecting advocacy data to commercial outcomes. If Net Advocacy Score lives in a customer experience report and never gets connected to revenue, retention, or acquisition cost data, it will always be treated as a soft metric. The moment you can show that high-advocacy customers have a 40% higher lifetime value or a 25% lower churn rate, the metric gets taken seriously by finance and leadership. Do the work to make that connection visible.

Using it as a vanity metric. A rising advocacy score is only useful if you understand why it is rising and whether that improvement is translating into actual referral behaviour and commercial outcomes. Tracking the score in isolation, without connecting it to the actions that influenced it, is how metrics become decorative rather than useful.

Net Advocacy Score Versus Net Promoter Score: Choosing the Right Tool

NPS is not going away, and it is not useless. It has genuine value as a sentiment indicator, particularly for tracking directional change over time and benchmarking against industry norms. The problem is not NPS itself but the way it is typically used: as a proxy for growth potential when it is actually a proxy for stated satisfaction.

Net Advocacy Score and NPS are most powerful when used together rather than as alternatives. NPS tells you about sentiment and intent. Net Advocacy Score tells you about behaviour. The gap between them is diagnostic. A high NPS with a low advocacy score suggests your customers think well of you but are not sufficiently motivated to act on that sentiment. That gap points to a specific intervention: identifying what would convert positive sentiment into active recommendation.

I judged the Effie Awards for several years, and one of the things that struck me consistently was how many campaigns were built around the assumption that positive brand sentiment automatically translates into commercial behaviour. It does not. There is a step between “I like this brand” and “I told someone about it” that many brands never close, and that step is where advocacy measurement earns its keep.

The brands that close that gap most effectively tend to have two things in common: a product or service experience that genuinely exceeds expectations at key moments, and a deliberate strategy for making recommendation feel natural and low-effort for customers who are already positively disposed. Neither of those things is primarily a communications challenge. Both of them are commercial design challenges.

Building an Advocacy Strategy That Compounds

If you want advocacy to function as a genuine growth driver rather than a metric you report, it needs to be treated as a strategic priority rather than a customer experience initiative. That means it needs executive ownership, commercial targets, and a clear line of sight to revenue.

The practical steps are not complicated, but they do require cross-functional commitment:

Start by establishing your baseline. Run a properly structured advocacy survey across your customer base and segment the results by cohort, tenure, product, and channel. You need to know where you stand before you can decide where to invest.

Identify your advocate profile. Who are your highest-advocacy customers? What do they have in common in terms of how they discovered you, what they bought, how they were onboarded, and how long they have been customers? This profile is one of your most valuable strategic assets.

Find the experience moments that drive advocacy. Through qualitative research with your advocate segment, identify the specific moments in the customer experience that prompted them to recommend you. These are your advocacy triggers, and they are worth protecting and amplifying.

Fix the detractor experience. Before you invest in advocacy activation, address the experience failures that are creating active detractors. One detractor in a professional network can undo the work of multiple advocates. The cost of detractors is systematically underestimated.

Make recommendation easy. Reduce the friction between positive sentiment and active recommendation. This does not mean discounts and referral codes. It means making it simple, natural, and socially comfortable for customers to mention you when the context is right. That might be through community, content, or simply by giving people a story worth telling.

The broader strategic context for this kind of work sits within growth strategy planning, and if you are thinking about how advocacy connects to your wider commercial model, the Go-To-Market & Growth Strategy section of The Marketing Juice covers the surrounding territory in more depth. Advocacy does not exist in isolation. It is one mechanism within a broader system, and it works best when the rest of that system is coherent.

Growth hacking frameworks often reference advocacy as a core lever, and there are useful tactical examples in resources like Crazy Egg’s growth hacking overview, though the most durable advocacy strategies are less about clever mechanics and more about genuinely earning the right to be recommended.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is Net Advocacy Score and how is it calculated?
Net Advocacy Score measures the percentage of customers who report actively recommending your brand, minus the percentage who report actively discouraging others from using it. It is calculated by asking customers a direct behavioural question about whether they have recommended or discouraged use of your brand in a recent time period, typically the past 90 days. The result is expressed as a percentage, similar in structure to NPS but grounded in observed behaviour rather than stated intent.
How is Net Advocacy Score different from Net Promoter Score?
NPS asks customers how likely they are to recommend your brand on a scale of zero to ten. Net Advocacy Score asks whether they have actually recommended it. NPS measures intent and sentiment. Net Advocacy Score measures behaviour. The gap between the two is often significant, and tracking both gives you a more complete picture of your brand’s advocacy health than either metric alone.
What is a good Net Advocacy Score?
There is no universal benchmark because advocacy rates vary significantly by category. In categories where purchase decisions are private or carry social risk, such as financial services or healthcare, lower advocacy scores are structurally normal. In categories where recommendation is socially comfortable and frequent, such as restaurants or consumer technology, higher scores are more achievable. The most useful benchmark is your own trend over time and, where available, direct competitor comparison within the same category.
Can a referral programme improve Net Advocacy Score?
A referral programme can activate existing advocacy by reducing friction and providing an easy mechanism for recommendation. It will not create advocacy where none exists. If your Net Advocacy Score is low because the product or service experience is not strong enough to generate genuine enthusiasm, a referral incentive will produce a short-term uptick followed by a return to baseline once the incentive loses novelty. Referral programmes work best as an amplification tool for brands that already have a positive advocacy foundation.
How often should you measure Net Advocacy Score?
Quarterly measurement is a reasonable cadence for most businesses, with the survey timed at a consistent point in the customer lifecycle rather than immediately following a positive interaction. For businesses with high transaction volumes and short purchase cycles, monthly measurement may be appropriate. For businesses with long sales cycles or infrequent purchases, semi-annual measurement may be more practical. Consistency in timing and methodology matters more than frequency, because the trend over time is where the insight lives.
