Customer Success Benchmarks Are Measuring the Wrong Things
A customer success benchmark tells you where your retention, expansion, and satisfaction metrics stand relative to industry norms. The problem is that most companies use benchmarks to feel good about mediocre performance rather than to diagnose what is actually broken in the customer relationship.
If your net revenue retention sits at 105% and the benchmark for your category is 100%, that gap looks like success. What it often masks is a cohort of churned customers who left quietly, offset by upsell revenue from a handful of accounts that would have expanded anyway. Benchmarks smooth over the texture of what is really happening.
Key Takeaways
- Most customer success benchmarks measure outputs rather than the underlying health of the customer relationship, which makes them useful for reporting but unreliable for decision-making.
- Net Revenue Retention is the single most commercially honest customer success metric because it captures both churn and expansion in one number, but it still requires cohort-level analysis to be actionable.
- Benchmarking against your industry category is only useful if the companies in that category have similar go-to-market motions, contract structures, and customer profiles. Broad sector averages are almost always misleading.
- The companies with the strongest customer success metrics tend to invest in product quality and onboarding before they invest in customer success headcount. Headcount cannot fix a product or proposition problem.
- Marketing and customer success are more connected than most organisations acknowledge. The promises made in acquisition campaigns directly determine how satisfied customers feel after they buy.
In This Article
- Why Most Customer Success Benchmarks Are Built on Shaky Ground
- Which Customer Success Metrics Actually Matter
- What Good Customer Success Benchmarks Actually Look Like
- The Marketing Problem Hidden Inside Customer Success Data
- How to Build a Customer Success Benchmark That Is Actually Useful
- The Headcount Trap in Customer Success
- Connecting Customer Success Benchmarks to Revenue Planning
Why Most Customer Success Benchmarks Are Built on Shaky Ground
I spent a significant part of my career working with businesses that were losing money and needed to find a path back to commercial health. One pattern I saw repeatedly was companies that were hitting industry benchmarks and still declining. Their NPS scores were in the acceptable range. Their renewal rates looked fine on paper. Their customer success team was busy. And yet the business was slowly hollowing out.
The issue was that they were benchmarking against the wrong reference points. They were comparing themselves to competitors with different sales motions, different contract lengths, different customer profiles, and different product maturity levels. The benchmark gave them permission to stop asking harder questions.
This is the fundamental problem with how most organisations use customer success data. Benchmarks are borrowed from aggregated industry surveys that mix together companies at very different stages of maturity, with very different customer bases. A SaaS company selling to enterprise IT departments and a SaaS company selling to SME marketing teams might sit in the same industry category. Their churn dynamics are completely different. Comparing their NPS scores tells you almost nothing useful about either of them.
If you want to understand where your customer success function stands, the most useful comparison is your own historical performance, broken down by customer cohort, acquisition channel, and product tier. That is harder to produce than an industry benchmark. It is also far more actionable.
Which Customer Success Metrics Actually Matter
There are dozens of metrics that get attached to customer success functions. Most of them are activity metrics dressed up as outcome metrics. Ticket resolution time, customer health scores, quarterly business review completion rates, and onboarding completion percentages all measure what the customer success team is doing. They do not reliably measure whether customers are getting value.
The metrics that carry genuine commercial weight are a shorter list.
Net Revenue Retention is the most honest single number in customer success. It measures the revenue retained from an existing customer cohort over a defined period as a percentage of that cohort's starting revenue: starting revenue, plus expansion from upsells and cross-sells, minus revenue lost to churn and contraction, divided by starting revenue. An NRR above 100% means your existing customer base is growing without any new customer acquisition. An NRR below 100% means you are losing ground even before you account for the cost of acquiring new customers to replace the ones you lost.
Gross Revenue Retention strips out expansion revenue and measures pure retention. It is a harder number to manipulate and gives a clearer signal about whether customers are staying. The gap between your GRR and your NRR tells you how dependent your customer success function is on upsell activity to offset underlying churn. A large gap between the two is worth examining carefully.
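To make the relationship between the two metrics concrete, here is a minimal sketch of the NRR and GRR calculations for a single cohort. The revenue figures are illustrative, not benchmarks; the point is how a healthy-looking NRR can sit on top of a weak GRR.

```python
# Sketch of NRR and GRR for one customer cohort.
# All figures below are hypothetical, chosen to show how
# expansion revenue can mask underlying churn.

def net_revenue_retention(start_rev, expansion, churn, contraction):
    """NRR = (starting revenue + expansion - churn - contraction) / starting revenue."""
    return (start_rev + expansion - churn - contraction) / start_rev

def gross_revenue_retention(start_rev, churn, contraction):
    """GRR strips out expansion: (starting revenue - churn - contraction) / starting revenue."""
    return (start_rev - churn - contraction) / start_rev

# A cohort that started the year at $1.0m in recurring revenue:
start, expansion, churn, contraction = 1_000_000, 180_000, 100_000, 30_000

nrr = net_revenue_retention(start, expansion, churn, contraction)
grr = gross_revenue_retention(start, churn, contraction)

print(f"NRR: {nrr:.0%}")        # 105% -- looks healthy in isolation
print(f"GRR: {grr:.0%}")        # 87%  -- the underlying retention picture
print(f"Gap: {nrr - grr:.0%}")  # 18 points of upsell offsetting churn
```

In this illustration the cohort reports 105% NRR while actually losing 13% of its starting revenue to churn and contraction. That 18-point gap is exactly the pattern worth examining.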
Time to Value is underused and underrated. It measures how long it takes a new customer to reach the outcome they purchased the product to achieve. Businesses that shorten time to value consistently see better retention, more referrals, and higher expansion rates. It is also one of the few customer success metrics that creates a direct feedback loop into product development and onboarding design.
Customer Lifetime Value at the cohort level, not the average, shows you which customer segments are genuinely profitable over time and which ones are consuming resources without generating proportionate return. I have seen companies where the top 20% of customers by lifetime value were subsidising the bottom 40%, who churned within 18 months regardless of how much attention the customer success team gave them. The acquisition team kept buying the wrong customers because no one was feeding cohort-level LTV data back into targeting decisions.
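A simple sketch of what cohort-level LTV analysis looks like in practice, using entirely hypothetical customer records. The blended average hides what the per-segment view makes obvious: one segment can be net-negative while the overall number still looks acceptable.

```python
# Hypothetical sketch: net lifetime value by acquisition segment,
# rather than one blended average. All records are illustrative.
from collections import defaultdict
from statistics import mean

customers = [
    # (segment, lifetime_revenue, cost_to_serve)
    ("enterprise", 120_000, 30_000),
    ("enterprise",  95_000, 28_000),
    ("mid_market",  40_000, 18_000),
    ("mid_market",  35_000, 20_000),
    ("smb",          8_000, 12_000),  # churned inside 18 months
    ("smb",          6_000, 11_000),  # churned inside 18 months
]

ltv_by_segment = defaultdict(list)
for segment, revenue, cost in customers:
    ltv_by_segment[segment].append(revenue - cost)

# The blended average looks fine...
blended = mean(rev - cost for _, rev, cost in customers)
print(f"blended avg net LTV: {blended:,.0f}")

# ...until you break it down by segment.
for segment, values in ltv_by_segment.items():
    print(f"{segment:11s} avg net LTV: {mean(values):>8,.0f}")
```

In this toy dataset the SMB segment has a negative net LTV, so every SMB customer the acquisition team buys is subsidised by the enterprise cohort. That is the feedback loop into targeting decisions the paragraph above describes.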
This is part of a broader set of go-to-market decisions that determine whether a business grows sustainably or just generates activity. The Go-To-Market and Growth Strategy hub covers the commercial frameworks that connect customer success metrics to acquisition strategy, positioning, and revenue planning.
What Good Customer Success Benchmarks Actually Look Like
If you are going to use external benchmarks, the most useful ones are segmented by business model, contract value, and customer type rather than broad sector. A benchmark that tells you the median NRR for B2B SaaS companies is somewhere around 100 to 110 percent is not particularly useful if you sell mid-market contracts with annual renewals. A benchmark that shows NRR broken down by average contract value, customer segment, and go-to-market motion is considerably more useful.
Some useful reference points for orientation, without treating them as precise targets:
- For product-led growth businesses where customers self-serve and expand through usage, NRR benchmarks tend to be higher because expansion is built into the product model rather than dependent on a sales or success motion.
- For high-touch enterprise businesses with long sales cycles and complex implementations, GRR is often a more relevant metric than NRR because expansion revenue is lumpy and contract-driven rather than continuous.
NPS benchmarks are the most widely used and the least reliable. The score itself varies enormously depending on when in the customer lifecycle you ask, how you ask, and what you do with the responses. I have seen companies with NPS scores in the 50s that had chronic churn problems and companies with NPS scores in the 30s that had excellent retention. The score is a signal worth tracking over time within your own customer base. As a cross-company comparison, it is almost meaningless without significant context.
BCG’s work on commercial transformation and go-to-market strategy makes the point that growth initiatives fail most often not because of poor execution but because of misalignment between what the business promises and what it delivers. Customer success metrics are, at their core, a measure of that alignment. When the numbers look bad, the diagnosis rarely starts in the customer success function.
The Marketing Problem Hidden Inside Customer Success Data
When I was running an agency and we were scaling hard, one of the things I noticed was that the clients who churned fastest were almost always the ones we had oversold during the pitch process. Not deliberately, but through the natural optimism that creeps into any sales conversation. We would talk about what was possible. They would hear what was guaranteed. The gap between those two things showed up in the customer success data six months later.
This is a marketing problem as much as it is a customer success problem. The promises made in advertising, in sales collateral, in onboarding communications, and in the pitch deck set the expectations that customer success teams then have to manage. If marketing is generating acquisition volume by overstating what the product does or by targeting customers who are not a good fit for the product, the customer success function cannot fix that. It can only manage the fallout.
I have always believed that if a company genuinely delivered on what it promised at every customer touchpoint, it would need considerably less marketing spend than most businesses assume. The growth would be self-reinforcing. Referrals would flow. Retention would be high. The acquisition cost per customer would fall over time. Most businesses spend on marketing to compensate for gaps in the customer experience rather than to amplify an experience that is already working.
This is not an argument against marketing investment. It is an argument for being honest about what the investment is actually doing. If your customer success metrics are weak, the first question worth asking is whether your marketing is attracting the right customers with accurate expectations. The answer is often uncomfortable.
Tools for diagnosing where in the growth funnel the problems originate have become more sophisticated. Growth analysis tools can now show you where customers are dropping off, which acquisition channels produce the highest lifetime value, and which customer segments are most likely to expand. The data is available. What most organisations lack is the willingness to act on what it shows them.
How to Build a Customer Success Benchmark That Is Actually Useful
Rather than starting with an industry benchmark and working backwards, the more useful approach is to build your own internal benchmark first and then use external data to sense-check it.
Start with cohort analysis. Segment your customer base by acquisition month, acquisition channel, customer size, and product tier. Track NRR and GRR for each cohort over 12, 24, and 36 months. This will show you which cohorts retain well and which ones deteriorate, and it will give you a baseline against which to measure the impact of any changes you make to onboarding, product, or customer success processes.
Add a time-to-value measurement. Define what “value achieved” means for your product in concrete terms, not “onboarding complete” but the specific outcome the customer purchased the product to get. Measure how long it takes each cohort to reach that point. Correlate time-to-value with 12-month retention. In most businesses, the correlation is significant enough to make time-to-value one of the most predictive metrics you have.
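A minimal version of that correlation check can be done without any specialist tooling. The sketch below splits customers at the median time-to-value and compares 12-month retention between the fast and slow halves; the data is hypothetical, and a real analysis would use full cohorts and a proper significance test.

```python
# Illustrative sketch: does shorter time-to-value predict
# 12-month retention? The records below are hypothetical.
from statistics import mean, median

# (days_to_first_value, retained_at_12_months)
customers = [
    (12, True), (18, True), (25, True), (30, True),
    (45, False), (52, True), (60, False), (75, False),
]

# Split the base at the median time-to-value.
cutoff = median(days for days, _ in customers)
fast = [retained for days, retained in customers if days <= cutoff]
slow = [retained for days, retained in customers if days > cutoff]

print(f"Median time to value: {cutoff:.1f} days")
print(f"Retention, fast half: {mean(fast):.0%}")
print(f"Retention, slow half: {mean(slow):.0%}")
```

Even a crude split like this is usually enough to show whether the relationship exists in your data; if it does, time-to-value becomes a leading indicator you can act on months before churn shows up in NRR.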
Build a churn analysis process. When customers leave, understand why at a level of specificity that is actually useful. “Product didn’t meet needs” is not useful. “Customer needed feature X which the product lacks” is useful. “Customer was acquired through channel Y and consistently had unrealistic expectations” is very useful. Churn analysis done well feeds directly into product roadmap, marketing targeting, and onboarding design.
Then, and only then, look at external benchmarks. Use them to identify whether you are materially out of range in any direction, not to validate performance that might be mediocre in absolute terms but looks acceptable relative to a poorly constructed peer group.
Forrester’s work on intelligent growth models is relevant here. The consistent finding across growth research is that companies which build systematic feedback loops between customer outcomes and commercial decisions outperform those that manage customer success as a standalone function. The benchmark is not the destination. It is a diagnostic tool in a larger commercial system.
The Headcount Trap in Customer Success
One of the most common responses to weak customer success metrics is to hire more customer success managers. More coverage, more touchpoints, more QBRs, more check-in calls. It is a response that feels like action and often produces no measurable improvement in retention or expansion.
I have seen this pattern in multiple businesses. The customer success team grows. The metrics do not move. The explanation given is usually that the team needs more time to build relationships with the customer base, or that the tooling is not good enough, or that the handoff from sales is broken. These things may all be true. But they are rarely the root cause of poor retention.
Poor retention is almost always a product problem, a fit problem, or an expectation problem. Products that are genuinely good at delivering the outcome they promise retain customers without requiring intensive relationship management. Products that are hard to use, slow to deliver value, or misaligned with what was sold in the acquisition process will churn customers regardless of how attentive the customer success team is.
This is not a criticism of customer success as a function. It is a criticism of using customer success headcount as a substitute for fixing more fundamental problems. The best customer success teams I have seen are ones that operate as a feedback mechanism into product and marketing, not just as a retention defence force.
BCG’s analysis of pricing and go-to-market strategy in B2B markets makes a related point about the cost of serving different customer segments. In many B2B businesses, the cost to serve smaller or more complex customers is significantly higher than the revenue they generate, and customer success headcount is the primary driver of that cost. Benchmarking customer success performance without accounting for the cost structure underneath it gives an incomplete picture of commercial health.
Revenue and retention data are only part of what you need to assess how well your go-to-market motion is working. The broader strategic picture, including how customer success connects to positioning, pricing, and growth planning, is something I cover across the Go-To-Market and Growth Strategy section of The Marketing Juice.
Connecting Customer Success Benchmarks to Revenue Planning
One of the clearest signals that a business has a mature approach to customer success is when the metrics feed directly into revenue planning rather than sitting in a separate operational report.
If you know your NRR by cohort, you can model your baseline revenue for the next 12 months with reasonable accuracy before you account for any new customer acquisition. That baseline tells you how much new business you need to hit your growth targets and therefore how much you need to invest in acquisition. It also tells you the return on investment of improving retention by even a few percentage points, which is often considerably higher than the return on equivalent acquisition spend.
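The baseline model described above is simple arithmetic once you have NRR by cohort. Here is a sketch with hypothetical cohorts and NRR figures, projecting next year's revenue from the existing base and deriving the new-business gap against a growth target.

```python
# Sketch: projecting 12-month baseline revenue from cohort-level NRR,
# before any new acquisition. Cohorts, ARR, and NRR are hypothetical.

cohorts = {
    # cohort: (current ARR, trailing 12-month NRR)
    "2022-acquired": (2_400_000, 1.08),
    "2023-acquired": (1_600_000, 0.97),
    "2024-acquired": (  900_000, 1.02),
}

current_arr = sum(arr for arr, _ in cohorts.values())
baseline_next_year = sum(arr * nrr for arr, nrr in cohorts.values())

growth_target = current_arr * 1.25  # assume the plan calls for 25% growth
new_business_needed = growth_target - baseline_next_year

print(f"Current ARR:        {current_arr:>12,.0f}")
print(f"Baseline next year: {baseline_next_year:>12,.0f}")
print(f"Growth target:      {growth_target:>12,.0f}")
print(f"New business gap:   {new_business_needed:>12,.0f}")
```

The same model also prices retention improvements: nudge one cohort's NRR up a few points, re-run the projection, and compare the revenue gained against what equivalent acquisition spend would buy.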
Vidyard’s research on pipeline and revenue potential for go-to-market teams points to the gap between the revenue that exists within current customer bases and the revenue that companies are actually capturing. Most businesses underinvest in expansion revenue relative to new acquisition, even when the unit economics of expansion are substantially better.
This is a planning problem as much as a customer success problem. When revenue planning is built entirely around new customer acquisition, the organisation naturally directs its attention and resources toward acquisition. Customer success becomes a cost centre rather than a growth lever. The metrics reflect that prioritisation, and the benchmarks look worse than they need to.
The businesses I have seen get this right tend to have a clear view of expansion revenue as a distinct growth channel, with its own targets, its own investment logic, and its own set of metrics. They treat the existing customer base as an asset to be grown, not just a base to be maintained.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
