Incremental Sales: Are You Measuring What Marketing Caused?
Incremental sales are the revenue your marketing activity directly caused, stripped of everything that would have happened anyway. A customer who would have bought regardless of your campaign is not a win for that campaign. The gap between total sales and baseline sales is where the real measurement conversation begins.
Most marketing teams are not measuring this. They are measuring correlation and calling it causation. That distinction costs businesses real money, and it has done for as long as marketing has had dashboards to hide behind.
Key Takeaways
- Incremental sales measure what marketing caused, not what happened while marketing was running. The difference is the entire point of measurement.
- Attribution models assign credit but do not prove causation. Incrementality testing does. Both have a role, but they answer different questions.
- Holdout testing and geo-based experiments are the most reliable methods for measuring true incrementality, but they require discipline and patience most teams underestimate.
- Channels that look strong on last-click attribution are often capturing demand that already existed. Channels that look weak are sometimes the ones creating it.
- The goal is honest approximation, not false precision. A directionally correct incrementality framework is more valuable than a technically sophisticated model that measures the wrong thing.
In This Article
- Why Most Reported Sales Numbers Overstate Marketing’s Contribution
- What Incrementality Testing Actually Involves
- The Channels Most Likely to Overstate Their Contribution
- How to Set Up a Basic Incrementality Framework
- Incrementality and Inbound: A Frequently Overlooked Application
- Emerging Channels and the Incrementality Question
- The Commercial Case for Getting This Right
I spent years watching agency clients celebrate revenue numbers that looked impressive on a slide and said almost nothing about what the marketing had actually done. When I was at iProspect, growing the business from around 20 people to over 100, one of the hardest commercial conversations was helping clients understand that their reported return on ad spend was not the same as their incremental return. The numbers looked better before you asked the uncomfortable question.
Why Most Reported Sales Numbers Overstate Marketing’s Contribution
There is a structural problem baked into the way most marketing teams report performance. The default measurement approach, whether it is last-click attribution, first-touch, or even a blended multi-touch model, assigns credit to touchpoints. It does not test whether those touchpoints changed behaviour. That is a meaningful distinction, and it matters more in some channels than others.
Brand search is the clearest example. A customer decides to buy your product, types your brand name into Google, clicks a paid search ad, and converts. The paid search campaign gets credited with a sale. But that customer had already made the decision. The ad intercepted an intent signal it did not create. You would likely have captured that sale through organic search anyway, possibly at zero marginal cost.
I saw this pattern clearly when I launched a paid search campaign for a music festival at lastminute.com. The revenue numbers that came back within the first day were striking, six figures from what was a relatively straightforward campaign. It felt like a win. But the honest question, which I learned to ask more rigorously over the years, was how much of that revenue was genuinely incremental, and how much of it was demand that already existed and would have converted through another route. The answer shapes whether you scale the budget or hold it steady.
Understanding attribution theory in marketing helps clarify why no single attribution model solves this problem. Attribution models distribute credit across touchpoints based on rules or algorithms. They are useful for understanding the customer experience. They are not designed to tell you whether any given touchpoint changed the outcome. That requires a different methodology entirely.
What Incrementality Testing Actually Involves
Incrementality testing, at its core, is an experiment. You expose one group to your marketing activity and withhold it from a comparable control group. The difference in conversion rates or revenue between the two groups is your incremental lift. It is conceptually simple and operationally demanding.
There are three main approaches used in practice.
Holdout Testing
You randomly assign a portion of your audience, typically 10 to 20 percent, to a holdout group that does not see your ads. You compare their behaviour against the exposed group over the same period. The lift in conversion rate among the exposed group, adjusted for baseline behaviour, is your incremental effect. Most major ad platforms now offer some version of this natively, though the methodology varies and the devil is always in the setup.
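The arithmetic of the readout is simple; the discipline is in the setup. A minimal sketch of the lift calculation, with a hypothetical 90/10 split and invented conversion numbers:

```python
# Toy holdout-test readout. Group sizes and conversion counts below are
# illustrative, not figures from any real campaign.

def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Relative lift of the exposed group over the holdout baseline."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return (exposed_rate - holdout_rate) / holdout_rate

# 90/10 split: 2.4% conversion among the exposed vs 2.0% in the holdout
lift = incremental_lift(2160, 90_000, 200, 10_000)
print(f"Incremental lift: {lift:.1%}")  # 20.0%
```

The adjustment for baseline behaviour mentioned above is exactly this division by the holdout rate: the holdout group is your estimate of what would have happened anyway.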
Geo-Based Experiments
You run your campaign in some geographic markets and not others, then compare sales performance across regions. This approach is particularly useful for channels where individual-level holdouts are difficult to implement, such as TV, out-of-home, or broad digital display. The challenge is finding genuinely comparable markets and controlling for external variables. It is not a clean experiment, but it is often the most practical one available.
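One common way to read a geo test is a simple difference-in-differences: measure each market's growth over the test window and subtract the control market's growth from the test market's. This assumes the two markets would otherwise have moved together, which is exactly the comparability problem noted above. A minimal sketch with invented sales figures:

```python
# Difference-in-differences readout for a geo test.
# All sales figures are illustrative, not from any real campaign.

def did_lift(test_pre, test_post, control_pre, control_post):
    """Campaign effect as the test market's growth minus the control's growth."""
    test_growth = test_post / test_pre - 1
    control_growth = control_post / control_pre - 1
    return test_growth - control_growth

# Test market grew 15% over the window; the control market grew 4%
effect = did_lift(100_000, 115_000, 80_000, 83_200)
print(f"Estimated incremental growth: {effect:.1%}")  # 11.0%
```

If the markets are not genuinely comparable, the control's growth is a poor counterfactual and the estimate inherits that bias, which is why market selection matters more than the arithmetic.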
Marketing Mix Modelling
MMM uses historical data to model the relationship between marketing spend and sales outcomes, accounting for external factors like seasonality, economic conditions, and competitor activity. It is better suited to understanding long-run channel contributions than measuring the incrementality of a specific campaign. It is also resource-intensive to build and maintain correctly. Most teams that think they have a working MMM actually have a spreadsheet with a lot of assumptions baked in.
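To make the "spreadsheet with assumptions baked in" point concrete, the simplest possible mix model is a one-channel regression of sales on spend. A real MMM is multivariate and handles adstock, saturation, seasonality, and competitor activity; this toy version, with invented weekly figures, shows only the core idea of estimating a spend-to-sales relationship from history:

```python
# Toy single-channel "mix model": ordinary least squares of weekly sales
# on weekly spend. Weekly figures are invented for illustration; a real
# MMM would include many channels plus adstock and saturation transforms.
spend = [10, 12, 9, 15, 14, 11, 16, 13]   # weekly spend, thousands
sales = [52, 58, 50, 66, 63, 55, 69, 60]  # weekly sales, thousands

n = len(spend)
mean_spend = sum(spend) / n
mean_sales = sum(sales) / n

# OLS slope: covariance of spend and sales over variance of spend
slope = (sum((x - mean_spend) * (y - mean_sales)
             for x, y in zip(spend, sales))
         / sum((x - mean_spend) ** 2 for x in spend))
intercept = mean_sales - slope * mean_spend  # the modelled "baseline"

print(f"~{slope:.2f}k sales per 1k spend, baseline ~{intercept:.1f}k/week")
```

The intercept is the model's baseline, the sales it attributes to nothing in particular, and every assumption you bake into the inputs moves it. That is where most homegrown MMMs quietly go wrong.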
For teams measuring affiliate channels specifically, the methodology has its own nuances. The question of whether an affiliate drove a sale or simply intercepted a customer who was already on their way to converting is one of the more contested measurement problems in performance marketing. How to measure affiliate marketing incrementality covers this in more detail, including how to structure holdout tests within affiliate programmes without disrupting publisher relationships.
The Channels Most Likely to Overstate Their Contribution
Not every channel has the same incrementality problem. Some are structurally more likely to claim credit for sales they did not cause.
Retargeting is the most common offender. Showing an ad to someone who visited your site, added something to their basket, and is clearly close to converting will generate high reported conversion rates. It will also frequently claim credit for sales that were going to happen regardless. The customer was already in the funnel. You may have accelerated the conversion by a day, or you may have simply been present when it happened. Those are different outcomes with different budget implications.
Brand keyword bidding is the second. As covered above, bidding on your own brand terms captures high-intent traffic that often would have found you anyway. The incrementality of brand search spend is one of the most reliably overestimated numbers in performance marketing, and it is one of the first things I look at when reviewing a new client’s account.
Email to existing customers follows a similar logic. If a customer who has bought from you three times in the past receives a promotional email and makes a purchase, was that email the cause? Or were they already in a purchase cycle? The honest answer is usually somewhere in between, and the only way to approximate it is through holdout testing within your email programme.
Forrester has written about the difference between sales measurement and marketing measurement, and the tension between the two is relevant here. Sales teams measure what happened. Marketing teams need to measure what they caused. Those are not the same exercise, and conflating them is where a lot of budget decisions go wrong.
How to Set Up a Basic Incrementality Framework
You do not need a data science team to start measuring incrementality more honestly. You need a clear question, a clean experiment, and the patience to let it run long enough to be meaningful.
Start with your highest-spend channel. That is where the stakes are highest and where overstatement is most costly. Define your hypothesis: what do you believe this channel is contributing beyond baseline behaviour? Then design the simplest possible test that can answer that question.
For a holdout test, the practical steps are: define your audience pool, randomly assign a holdout group of sufficient size to detect a meaningful lift, suppress the campaign for that group, run the test for long enough to capture a representative purchase cycle, and compare conversion rates between exposed and holdout groups. The lift percentage is your incrementality estimate.
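The "sufficient size" step is where tests most often quietly fail. A rough per-group estimate using the standard two-proportion sample-size formula, assuming equal-sized groups at roughly 95% confidence and 80% power (with an unequal 10 to 20 percent holdout you need proportionally more total traffic; the rates below are illustrative):

```python
import math

# Approximate sample size per group to detect a given relative lift in
# conversion rate. Uses the standard two-proportion formula with
# z = 1.96 (~95% confidence) and z = 0.84 (~80% power). Assumes equal
# group sizes; baseline rate and target lift below are illustrative.

def required_group_size(baseline_rate, relative_lift,
                        z_alpha=1.96, z_beta=0.84):
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 2% baseline conversion rate
print(required_group_size(0.02, 0.10))
```

The practical lesson is that small lifts on low baseline conversion rates need tens of thousands of users per group, which is why underpowered tests that "show no effect" are so common.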
There are things GA4 can help you track here and things it cannot. Understanding what data Google Analytics goals are unable to track is relevant context when you are trying to build an incrementality picture. GA4 can tell you what happened in sessions. It cannot tell you what would have happened in the absence of a touchpoint. That counterfactual is the piece you have to build through experimental design, not through the analytics platform alone.
For broader guidance on building a measurement infrastructure that supports this kind of analysis, the Marketing Analytics hub covers the full stack, from GA4 configuration to attribution frameworks to channel-level measurement approaches.
Incrementality and Inbound: A Frequently Overlooked Application
Most incrementality conversations happen in the context of paid media. But the question applies equally to inbound channels, and the answers are often more interesting.
If you publish a piece of content that ranks and drives organic traffic, the incremental question is whether that traffic converted customers who would not have found you otherwise, or whether it captured demand that would have arrived through a different route. The answer depends on your category, your competitive position, and how much of your organic traffic is branded versus non-branded.
Understanding inbound marketing ROI in incremental terms requires you to think about the counterfactual: if this content did not exist, what would have happened? Would those customers have found a competitor? Would they have searched differently and still found you? Would they have not searched at all? These are not easy questions, but they are the right ones.
The same logic applies to newer channels. As AI-generated answers increasingly appear in search results, the question changes: is your content driving incremental discovery, or simply appearing on a results page the user would have scanned anyway? How to measure the success of generative engine optimisation campaigns addresses some of these emerging measurement challenges, including how to think about attribution when the user experience does not follow a traditional click path.
Semrush has a useful breakdown of content marketing metrics that covers engagement and visibility signals, though the incrementality layer requires you to go beyond what any single analytics tool will surface automatically.
Emerging Channels and the Incrementality Question
Every time a new channel or format enters the media mix, the incrementality question resets. The channel might reach audiences your existing activity does not. Or it might reach the same audiences through a different surface, adding frequency without adding reach or conversion lift.
AI avatars and synthetic media are a current example. Brands are beginning to use AI-generated spokespersons and personalised video content at scale. The question of whether these formats drive incremental sales above and beyond standard creative is genuinely open. How to measure the effectiveness of AI avatars in marketing covers the measurement framework for this, including how to isolate the format effect from the audience and placement effects.
The pattern repeats across channel innovation. When conversion tracking for paid search was still being standardised, as Search Engine Land documented in the early development of AdWords conversion tracking, the industry was still working out how to attribute sales to clicks at all. The incrementality question came later, once attribution became table stakes. We are at a similar early stage with several emerging formats now.
My view, shaped by two decades of watching new channels get oversold and underscrutinised, is that the incrementality test is the most useful filter you can apply to any new channel investment. Not: does it drive reported conversions? But: does it drive conversions that would not have happened otherwise?
The Commercial Case for Getting This Right
When I was turning around a loss-making agency business, one of the first things I did was look at where client budgets were going relative to where value was actually being created. The two did not always align. Channels that looked productive on reported metrics were sometimes doing very little in incremental terms. Channels that looked modest on a dashboard were often doing the heavy lifting.
The commercial implication is straightforward. If you are spending budget on activity that is not driving incremental sales, you are either wasting money or, at best, spending it inefficiently. Reallocation based on incrementality data tends to improve overall return without requiring an increase in total spend. That is a compelling internal argument in almost any business environment.
It also changes how you have conversations with channel partners. When you can demonstrate that you are measuring incrementality, not just reported conversions, you signal a level of commercial sophistication that changes the dynamic. Partners who know you are testing lift will behave differently than partners who know you are only looking at last-click numbers. That is a useful structural advantage.
Mailchimp’s overview of marketing metrics is a reasonable starting point for teams building out their measurement vocabulary, though incrementality sits above the standard metrics layer and requires a more deliberate experimental approach to surface correctly.
There is also a planning dimension. If you know which channels are genuinely incremental, you can model the revenue impact of scaling them with more confidence. If you are working from reported ROAS figures that include non-incremental sales, your scaling models will systematically overestimate the return on additional spend. That is the kind of error that looks fine in a forecast and painful in a P&L.
For teams building out a broader analytics capability, the Marketing Analytics hub covers the measurement frameworks, tools, and methodologies that sit underneath an incrementality programme. Getting the infrastructure right is a prerequisite for getting the measurement right.
The early days of my career taught me something that has stayed relevant across every role since. When I could not get budget for a new website and taught myself to code instead, the lesson was not about resourcefulness for its own sake. It was about understanding the problem clearly enough to solve it without waiting for permission. Incrementality measurement has the same character. You do not need a perfect setup or an unlimited budget for testing. You need a clear question, a willingness to run a controlled experiment, and the honesty to act on what you find.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
