Customer Retention Research: What the Data Shows
Customer retention research from 2023 to 2025 points in a consistent direction: keeping existing customers is cheaper than replacing them, and the gap between high-retention and low-retention businesses compounds over time in ways that acquisition spending alone cannot fix. But the research also reveals something less comfortable. Most companies are not failing at retention because they lack the right tactics. They are failing because they do not measure it honestly or act on what the data tells them.
This is a synthesis of what the recent research actually shows, where it holds up under scrutiny, and where the headline findings deserve more scepticism than they typically receive.
Key Takeaways
- Retention research from 2023 to 2025 consistently shows that churn is underreported inside most organisations because the measurement frameworks are too narrow.
- Propensity modelling and behavioural signals are now accessible to mid-market businesses, not just enterprise, and the gap in adoption is a competitive opportunity.
- Cross-sell and upsell programmes drive disproportionate revenue from retained customers, but only when they are built on purchase behaviour, not demographic assumptions.
- Churn surveys remain one of the most underused tools in retention, despite being low-cost and high-signal when designed well.
- The strongest retention programmes share one characteristic: they are owned by someone with commercial accountability, not just a customer success team with a satisfaction target.
In This Article
- Why Retention Research Deserves Careful Reading
- What the Evidence Suggests About Churn Measurement
- The Cross-Sell Evidence and What It Actually Means
- What Churn Surveys Reveal That Other Data Cannot
- The Loyalty Programme Evidence: More Nuanced Than the Headlines Suggest
- Lifetime Value: The Metric That Changes How You Think About Retention
- The Accountability Gap in Retention Programmes
- Where Retention Marketing Fits and Where It Does Not
Why Retention Research Deserves Careful Reading
I have spent a fair amount of time judging the Effie Awards, which means reviewing cases where marketers make claims about effectiveness. One thing you notice quickly is how selectively data gets used. A brand will cite a retention lift of 12% without telling you the baseline, the time period, or whether anything else changed in the business during that window. The number sounds meaningful. It might be. But it might also be noise dressed up as signal.
Retention research published between 2023 and 2025 has the same problem in places. Survey-based studies ask marketing leaders what they prioritise or what they believe drives loyalty. Those responses tell you about attitudes, not outcomes. Behavioural data from platforms tells you what users did inside a product or on a website, but not why they stayed or left. Aggregate industry benchmarks hide enormous variance by sector, business model, and customer segment.
None of this means the research is useless. It means you should read it the way a good analyst reads any dataset: looking for patterns that hold across multiple sources, being cautious about single-study findings, and asking whether the methodology matches the claim being made.
If you want a broader grounding in how retention fits into commercial strategy, the Customer Retention hub covers the full picture, from measurement to programme design to the acquisition-versus-retention debate that never quite goes away.
What the Evidence Suggests About Churn Measurement
One of the clearest findings across recent retention research is that most organisations are measuring churn too narrowly. They track cancellations or lapsed accounts, but they miss the customers who technically remain active while their spending has dropped significantly, their engagement has fallen, or they have started buying a competing product alongside yours.
I saw this pattern repeatedly when I was running agencies. A client would report strong retention numbers because their contract renewal rate was high. But when we looked at share of wallet, the picture was different. Customers were renewing, but they were spending less each year and routing new budget elsewhere. On paper, retention looked healthy. In commercial terms, the relationship was eroding.
The research from this period suggests that companies using behavioural signals, including login frequency, feature usage, support ticket volume, and purchase recency, get a more accurate picture of retention risk than those relying on contract status or satisfaction scores alone. Forrester’s work on propensity modelling makes the case that identifying at-risk accounts before they churn requires this kind of multi-signal approach, and that the window for intervention is typically shorter than most retention teams assume.
The practical implication is not that every business needs a sophisticated data science team. It is that the definition of “retained customer” needs to be broader than most CRM configurations currently allow.
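To make the broader definition concrete, here is a minimal sketch of what a multi-signal retention status might look like. The signals mirror the ones named above (engagement, spend trend, service friction), but the specific thresholds, field names, and the two-signal rule are illustrative assumptions, not figures from the research.

```python
from dataclasses import dataclass

@dataclass
class Account:
    contract_active: bool
    logins_last_30d: int
    spend_change_yoy: float   # e.g. -0.25 means spend down 25% year on year
    open_support_tickets: int

def retention_status(a: Account) -> str:
    """Classify an account using several behavioural signals,
    not contract status alone. All thresholds are illustrative."""
    if not a.contract_active:
        return "churned"
    at_risk_signals = sum([
        a.logins_last_30d < 3,        # engagement has dropped
        a.spend_change_yoy < -0.20,   # spending down more than 20%
        a.open_support_tickets >= 5,  # service friction building up
    ])
    return "at_risk" if at_risk_signals >= 2 else "retained"

# A renewing account with falling spend and low engagement gets flagged,
# even though a contract-status view would call it "retained".
acct = Account(contract_active=True, logins_last_30d=1,
               spend_change_yoy=-0.30, open_support_tickets=0)
print(retention_status(acct))  # at_risk
```

The point is not the specific rule but the shape of it: “retained” becomes a judgement built from multiple signals rather than a single contract flag.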
The Cross-Sell Evidence and What It Actually Means
Cross-sell and upsell performance is one of the areas where the 2023 to 2025 research is most consistent. Retained customers who have had a positive experience with one product or service are meaningfully more likely to purchase adjacent offerings than cold prospects. This is not a controversial finding. What is more interesting is the research on why cross-sell programmes fail despite that receptivity.
The pattern that shows up repeatedly is that cross-sell programmes are built around what the business wants to sell, not what the customer’s behaviour suggests they are ready to buy. Segmentation is done on demographics or account size rather than on purchase history, usage patterns, or expressed needs. The result is a programme that looks like a revenue initiative but performs like a spam campaign.
Forrester’s analysis of cross-sell measurement highlights the difficulty of attributing cross-sell revenue accurately, particularly in B2B contexts where multiple stakeholders are involved and the sales cycle is long. This matters because if you cannot measure cross-sell performance reliably, you cannot optimise it. Many businesses are running cross-sell programmes on instinct and calling the results a strategy.
When I was growing the agency from around 20 people to over 100, we had to get disciplined about which clients were genuinely expandable and which were not. Some accounts looked like cross-sell opportunities because they had budget. But the relationship was not deep enough, or the timing was wrong, or the client contact did not have the internal authority to commission new work. Treating all retained clients as cross-sell candidates regardless of context was a reliable way to damage the relationship and miss the actual opportunity.
What Churn Surveys Reveal That Other Data Cannot
Churn surveys are one of the most consistently underused tools in retention, and the research from this period makes that gap hard to ignore. When a customer leaves, they often know exactly why. The reason they give in a survey may not always be the full picture, but it is almost always more useful than the internal assumptions the business was operating on.
The problem is that most churn surveys are poorly designed. They ask leading questions, offer too few response options, or bury the most important question at the end of a long form that customers do not finish. Hotjar’s guidance on churn survey design is worth reading because it addresses the practical mechanics: when to send the survey, how to frame questions to get honest answers, and how to handle the data once you have it.
The more uncomfortable finding in the research is what churn surveys reveal about the gap between why companies think customers leave and why they actually do. Businesses tend to assume price is the primary driver of churn. Customers, when asked directly, more often cite service quality, lack of responsiveness, or the feeling that the company stopped paying attention to them after the initial sale. Price is a convenient explanation because it implies the problem is external and largely unfixable. The real reasons are often internal and fixable, which is exactly why they are harder to hear.
I have sat in enough post-mortem conversations with clients who lost a major account to know how this plays out. The account team will say the client went with a cheaper option. Sometimes that is true. But when you dig into the timeline, there is usually a moment, six or twelve months earlier, where something went wrong and nobody addressed it. The price conversation was the exit, not the cause.
The Loyalty Programme Evidence: More Nuanced Than the Headlines Suggest
Loyalty programmes attract a lot of research attention, and the findings from 2023 to 2025 are more nuanced than the industry tends to present them. The headline finding, that loyalty programme members spend more and churn less, is real. But the causal story is messier than it appears.
The customers who join loyalty programmes are not a random sample of your customer base. They are, on average, already more engaged, more frequent purchasers, and more likely to stay. When you measure the behaviour of loyalty programme members against non-members and find that members spend more, you are partly measuring the effect of the programme and partly measuring the pre-existing characteristics of the people who opted in. Separating those two effects requires controlled testing, and most loyalty programme evaluations do not do it.
A/B testing approaches to retention can help here, because they allow you to isolate the effect of a specific intervention rather than comparing groups that were never equivalent to begin with. The research suggests that businesses willing to test their retention programmes rigorously tend to find that some elements drive genuine behaviour change and others are essentially rewards for behaviour that would have happened anyway.
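A randomised holdout is the simplest way to separate programme effect from self-selection. The sketch below compares retention between a treatment group and a holdout using a standard two-proportion z-test; the customer counts are made-up numbers for illustration.

```python
from math import sqrt

def retention_lift(treated_retained, treated_total,
                   holdout_retained, holdout_total):
    """Compare retention between a randomly assigned treatment group
    (who received the loyalty intervention) and a holdout (who did not).
    Returns the lift in retention rate and an approximate z-score from
    a pooled two-proportion test. Random assignment is what removes the
    bias of comparing opt-in members against everyone else."""
    p_t = treated_retained / treated_total
    p_h = holdout_retained / holdout_total
    p_pool = (treated_retained + holdout_retained) / (treated_total + holdout_total)
    se = sqrt(p_pool * (1 - p_pool) * (1 / treated_total + 1 / holdout_total))
    return p_t - p_h, (p_t - p_h) / se

# Illustrative numbers: 84% vs 80% retention on 5,000 customers each.
lift, z = retention_lift(4200, 5000, 4000, 5000)
print(f"lift = {lift:.1%}, z = {z:.2f}")
```

The comparison only means something if assignment was random before the intervention; running the same arithmetic on members versus non-members reproduces exactly the selection bias described above.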
That does not mean loyalty programmes are a waste. It means the evaluation framework matters as much as the programme design. MarketingProfs’ framework for building loyalty that connects to profitability is useful here because it pushes past the engagement metrics and asks which loyalty behaviours actually translate to margin.
Lifetime Value: The Metric That Changes How You Think About Retention
The research on customer lifetime value from this period reinforces something that should be obvious but often is not acted on: the economics of retention look very different depending on where in the customer lifecycle you intervene.
Retaining a customer in their first year is a different problem from retaining a customer who has been with you for five years. The first-year churn risk is typically highest, the customer is still forming their view of your product or service, and the intervention cost is relatively low because you have not yet invested heavily in the relationship. Five-year customers who start showing churn signals are a different calculation: the relationship has more value, the intervention may need to be more substantial, and the reasons for disengagement are likely more complex.
Improving lifetime value is not a single programme. It is a series of decisions made at different points in the customer relationship, each of which requires a different response. The research suggests that businesses with the highest retention rates tend to have distinct approaches for different lifecycle stages rather than a single retention programme applied uniformly.
When I was managing large advertising budgets across multiple sectors, the clients who had the clearest view of lifetime value by cohort were the ones who made the best decisions about where to invest in retention versus where to accept churn and reinvest in acquisition. The clients who worked from average LTV figures were constantly making decisions that looked rational at the aggregate level but were wrong for specific segments.
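The cohort point can be shown with a few lines of arithmetic. The sketch below uses the textbook LTV approximation (margin × retention ÷ (1 + discount − retention)) on two hypothetical cohorts; all figures are invented for illustration.

```python
# Hypothetical cohorts: (customers, avg annual margin per customer,
# annual retention rate). Figures are illustrative only.
cohorts = {
    "first_year": (1000, 400.0, 0.60),
    "mature":     (300, 900.0, 0.90),
}

def simple_ltv(margin, retention, discount=0.10):
    """Textbook approximation: LTV = margin * r / (1 + d - r)."""
    return margin * retention / (1 + discount - retention)

for name, (n, margin, r) in cohorts.items():
    print(name, round(simple_ltv(margin, r), 2))

# A blended average hides the gap: decisions that look rational
# against the average figure are wrong for both segments.
total = sum(n for n, *_ in cohorts.values())
avg_margin = sum(n * m for n, m, _ in cohorts.values()) / total
avg_r = sum(n * r for n, _, r in cohorts.values()) / total
print("blended", round(simple_ltv(avg_margin, avg_r), 2))
```

With these invented figures the mature cohort is worth several times the first-year cohort per customer, while the blended number sits in between and describes neither, which is the trap the average-LTV clients kept falling into.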
The Accountability Gap in Retention Programmes
One of the more striking findings across the 2023 to 2025 research is the accountability gap in how retention is owned inside organisations. In many businesses, retention sits with customer success, which has a satisfaction target. Marketing owns acquisition. Finance owns the P&L. Nobody owns the commercial outcome of retention in a way that connects the satisfaction data, the behavioural data, and the revenue data into a single accountable view.
This is not a small structural problem. It means that retention programmes are often optimised for the wrong metric. A customer success team hitting its NPS target may be doing so while the business is losing wallet share to competitors. A marketing team running retention campaigns may be measuring email open rates while churn continues at a rate that the campaign cannot offset.
CrazyEgg’s overview of customer retention strategies touches on this when it discusses the importance of aligning retention metrics to revenue outcomes rather than engagement proxies. The point is straightforward: if the person accountable for retention is not accountable for revenue, the programme will drift toward the metrics they can control.
I have turned around loss-making businesses where this was the central problem. The retention data looked fine. Satisfaction scores were acceptable. But the commercial picture was deteriorating because nobody was connecting the customer data to the financial data and asking the right questions. The fix was not a new retention programme. It was a new accountability structure that put someone in charge of the whole chain.
Where Retention Marketing Fits and Where It Does Not
The research from this period is fairly clear that retention marketing, meaning the campaigns, communications, and programmes designed to keep customers engaged, works best when the underlying product or service is genuinely good. When it is not, retention marketing can slow churn at the margins but cannot reverse a fundamental problem with the customer experience.
This is worth stating plainly because there is a tendency in marketing to treat retention as primarily a communications problem. If we just stay in touch better, personalise more, send the right message at the right time, customers will stay. Sometimes that is true. Often, the customers who are leaving are leaving because something about the product, the service, or the value proposition has stopped working for them, and no amount of well-timed email will change that.
Retention marketing at its best is built on understanding why customers stay, not just trying to prevent them from leaving. Those are related but different questions. The first leads to programmes that reinforce genuine value. The second leads to programmes that are essentially defensive, and defensive programmes rarely build the kind of loyalty that drives long-term revenue growth.
The most commercially grounded retention strategies I have seen treat marketing as one part of a system that includes product quality, service delivery, pricing integrity, and account management. When all of those are working, retention marketing amplifies what is already there. When they are not, retention marketing is a sticking plaster on a structural problem.
For a deeper look at how all of these elements fit together, the Customer Retention hub brings the research, frameworks, and commercial thinking into one place.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
