Incremental Revenue: Are You Measuring What Marketing Creates?
Incremental revenue is the revenue your marketing activity generates that would not have existed without it. Not total revenue attributed to a channel. Not last-click conversions. The actual, additional business created by a specific campaign, channel, or spend decision, measured against what would have happened anyway.
That distinction sounds simple. In practice, almost no marketing team measures it correctly, and the gap between attributed revenue and incremental revenue is often where budgets quietly disappear.
Key Takeaways
- Incremental revenue measures what marketing genuinely creates, not what it gets credit for in your attribution model.
- Attribution and incrementality are not the same thing. Confusing them leads to over-investment in channels that capture demand rather than create it.
- Holdout tests and geo-based experiments are the most reliable ways to measure true incrementality, but they require organisational patience most teams lack.
- Brand campaigns, affiliate programmes, and retargeting are the three areas where the gap between attributed and incremental revenue tends to be largest.
- Honest incrementality measurement often produces uncomfortable results. That discomfort is precisely why it is worth doing.
In This Article
- Why Attribution Is Not the Same as Incrementality
- The Channels Where Incrementality Gaps Are Largest
- How to Actually Measure Incremental Revenue
- What Good Incrementality Thinking Looks Like in Practice
- The Organisational Problem with Incrementality
- Incrementality and the Limits of Platform Data
- Building Incrementality Into Budget Decisions
I spent a significant part of my career running agencies where the business model depended on demonstrating value to clients. That creates a structural tension. Attribution models, by design, tend to show that marketing is working. Incrementality measurement asks a harder question: working compared to what? Those two questions produce very different answers, and very different budget decisions.
Why Attribution Is Not the Same as Incrementality
Most marketing teams run on attribution data. A customer clicks a paid search ad and buys. The channel gets credit. Repeat that across thousands of transactions and you have a performance report that looks convincing. The problem is that attribution models answer a different question than incrementality.
Attribution answers: which touchpoints were present in the path to purchase? Incrementality answers: which touchpoints actually caused the purchase?
For some channels, those answers overlap comfortably. For others, they diverge dramatically. Brand search is the clearest example. Someone who already intends to buy your product searches your brand name, clicks your paid ad, and converts. Your attribution model records a paid search conversion. Your incrementality analysis, if you run one, reveals that the same person would almost certainly have converted through organic search anyway. You paid for something you were going to get for free.
Understanding attribution theory in marketing is a prerequisite for understanding where it falls short. Attribution is a useful lens. It is not a measure of causation, and treating it as one is where a lot of budget gets wasted.
I have sat in enough budget reviews to know that this distinction rarely gets made explicitly. Teams present attributed ROAS figures, budgets get approved or cut on that basis, and nobody asks whether the revenue would have arrived anyway. Measurement frameworks at the KPI reporting level often obscure the question rather than surface it.
The Channels Where Incrementality Gaps Are Largest
Not every channel carries the same risk of over-attribution. Some are more likely than others to claim credit for revenue that would have happened regardless. Three areas consistently produce the largest gaps between what attribution shows and what incrementality testing reveals.
Retargeting. Retargeting campaigns almost always show strong attributed performance. They reach people who have already visited your site, expressed interest, and are therefore much more likely to convert than cold audiences. The question is whether the retargeting ad caused the conversion or whether those users were going to return and buy anyway. In my experience, the honest answer is usually somewhere in the middle, but the attribution model credits the channel with far more than it earns. Running a holdout test, where a percentage of your retargeting audience sees no ad and you compare conversion rates, typically produces a sobering result.
Affiliate marketing. Affiliate programmes are particularly vulnerable to incrementality problems because the incentive structure rewards last-click attribution. Voucher code affiliates, in particular, tend to intercept customers who have already decided to buy and simply add a discount code at checkout. Measuring affiliate marketing incrementality properly requires separating affiliates that introduce new customers from those that simply reduce your margin on existing intent. Most affiliate dashboards do not make this distinction visible.
Brand paid search. Bidding on your own brand terms can be justified in competitive markets where competitors are bidding on your brand, or where you need to control the messaging in the search result. But the default assumption that brand search campaigns generate incremental revenue is often wrong. The incremental lift from brand search, net of what organic would have delivered, is frequently much lower than the attributed figures suggest.
How to Actually Measure Incremental Revenue
There are three practical methods for measuring incrementality, each with different levels of rigour and different operational requirements.
Holdout tests. The most rigorous approach. You split your audience into a test group that sees your marketing activity and a holdout group that does not. You measure the difference in conversion rate or revenue between the two groups. The incremental revenue is the difference, scaled to your full audience. This requires either platform-level holdout functionality (Meta and Google both offer versions of this) or the ability to suppress ads to a defined audience segment. The results are reliable but the test requires discipline. You are deliberately not marketing to a group of people, which creates short-term cost. Most organisations find this psychologically difficult, which is one reason it is underused.
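To make the arithmetic concrete, here is a minimal sketch of the holdout calculation in Python. The audience sizes, conversion counts, and order value are all invented; the point it illustrates is how far incremental revenue can sit below the attributed figure for the same campaign.

```python
# Minimal sketch of holdout-test arithmetic. All figures are hypothetical.

test_size, holdout_size = 900_000, 100_000    # 90/10 audience split
test_conversions, holdout_conversions = 27_000, 2_700
avg_order_value = 80.0                        # revenue per conversion

test_rate = test_conversions / test_size            # 3.0%
holdout_rate = holdout_conversions / holdout_size   # 2.7%

# Incremental lift is the conversion-rate difference caused by the ads.
lift = test_rate - holdout_rate                     # 0.3 percentage points

# Scale the lift to the full audience to estimate incremental revenue.
full_audience = test_size + holdout_size
incremental_revenue = lift * full_audience * avg_order_value

# Compare with what last-click attribution would claim: every converting
# user in the test group gets credited to the campaign.
attributed_revenue = test_conversions * avg_order_value

print(f"Incremental revenue: £{incremental_revenue:,.0f}")  # ~£240,000
print(f"Attributed revenue:  £{attributed_revenue:,.0f}")   # £2,160,000
```

In this invented example the campaign is genuinely working, yet only around a tenth of its attributed revenue is incremental. That is the shape of result a retargeting holdout test often produces.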
Geo-based experiments. If holdout tests at the audience level are not feasible, geographic experiments offer an alternative. You run your campaign in some regions and not others, then compare revenue trends. This is messier because geographic markets are not identical, but it can produce directionally useful results, particularly for TV, out-of-home, or other channels that are difficult to measure at the individual user level. The methodology is well-established in econometrics and has been used by large advertisers for decades.
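A simple way to handle the "markets are not identical" problem is a difference-in-differences comparison: use the control regions' trend as the counterfactual for the treated regions. A sketch with hypothetical weekly figures:

```python
# Difference-in-differences sketch for a geo experiment.
# Regions and weekly revenue figures are invented.

treated = {"before": 100_000, "during": 118_000}   # campaign ran here
control = {"before": 95_000,  "during": 99_750}    # no campaign

# The control group's growth stands in for what the treated regions
# would have done without the campaign.
control_growth = control["during"] / control["before"]     # 1.05
expected_without_ads = treated["before"] * control_growth  # £105,000

incremental = treated["during"] - expected_without_ads     # £13,000
naive_uplift = treated["during"] - treated["before"]       # £18,000

print(f"Naive uplift:        £{naive_uplift:,.0f}")
print(f"Incremental revenue: £{incremental:,.0f}")
```

The gap between the naive uplift and the incremental figure is the market trend that would have lifted revenue anyway, which is exactly what comparing against control regions strips out.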
Marketing mix modelling. MMM uses statistical regression to attribute revenue outcomes to different marketing inputs over time, controlling for external factors like seasonality, pricing changes, and market conditions. It does not require individual-level data, which makes it privacy-safe and applicable to channels that resist digital tracking. The limitation is that it requires substantial historical data, takes time to build, and produces outputs that are model-dependent rather than directly observed. It is more useful for strategic budget allocation than for campaign-level optimisation. Understanding the directional nature of analytics reporting is important context for interpreting MMM outputs honestly.
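For illustration only, here is a toy version of the regression at the heart of MMM, built on synthetic data. A real model would use far more history and account for adstock and saturation effects, but the structure is the same: the intercept captures baseline revenue that exists without marketing, and the spend coefficients estimate incremental revenue per pound.

```python
# Toy marketing-mix regression on synthetic weekly data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
weeks = 104  # two years of weekly observations

search_spend = rng.uniform(10_000, 30_000, weeks)
social_spend = rng.uniform(5_000, 20_000, weeks)
seasonality = np.sin(np.arange(weeks) * 2 * np.pi / 52)  # annual cycle

# Ground truth baked into the synthetic data: search returns £3 per £1,
# social £1.50 per £1, on top of a £200k weekly baseline.
revenue = (200_000 + 3.0 * search_spend + 1.5 * social_spend
           + 40_000 * seasonality + rng.normal(0, 10_000, weeks))

X = sm.add_constant(np.column_stack([search_spend, social_spend, seasonality]))
model = sm.OLS(revenue, X).fit()

# Intercept = baseline (non-incremental) revenue; spend coefficients =
# estimated incremental revenue per pound of spend on each channel.
print(model.params)  # roughly [200000, 3.0, 1.5, 40000]
```

The useful discipline in reading an output like this is remembering that the coefficients are model estimates, not observed facts, which is why MMM belongs in strategic budget allocation rather than week-to-week optimisation.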
Early in my career, before I had the budget or the team to run formal incrementality tests, I used a much cruder proxy: I turned channels off. Not permanently, and not without tracking what happened to total revenue during the pause. It is not a controlled experiment, but watching what happens to the business when you stop spending on a channel tells you something real. Some channels, when paused, produce a noticeable drop in revenue. Others produce almost no observable effect. That observation alone is worth something, even if it is not statistically rigorous.
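Reduced to its simplest form, a pause test is a comparison of revenue during the pause against a trailing baseline. The figures below are invented and the result is directional at best, but this is roughly the arithmetic involved:

```python
# Crude pause-test check: revenue during the pause vs a trailing baseline.
# Figures are invented; this is directional evidence, not an experiment.

baseline_weeks = [52_000, 49_500, 51_200, 50_300]  # avg daily revenue, 4 weeks before
pause_weeks = [50_800, 49_900]                      # avg daily revenue, 2 weeks paused

baseline = sum(baseline_weeks) / len(baseline_weeks)
during_pause = sum(pause_weeks) / len(pause_weeks)

drop = baseline - during_pause
print(f"Baseline daily revenue: £{baseline:,.0f}")
print(f"During pause:           £{during_pause:,.0f}")
print(f"Drop: £{drop:,.0f} ({drop / baseline:.1%})")  # ~0.8% here
```

A drop of less than a percent, as in this invented example, is the "almost no observable effect" case: a strong hint that the channel's attributed revenue was mostly going to arrive anyway.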
What Good Incrementality Thinking Looks Like in Practice
Early in my time at lastminute.com, I ran a paid search campaign for a music festival. The results were striking. Six figures of revenue within roughly a day from a campaign that was not particularly complex. What made it work was not sophisticated measurement. It was the fact that we were reaching people who had genuine intent and had not yet found the product. The channel was creating demand access, not just capturing existing intent. That is what incremental revenue feels like when it is working properly.
The contrast with brand search is instructive. Brand search at a company like lastminute.com, which had strong organic visibility and high brand recall, was capturing intent that existed regardless of the paid activity. The incrementality of that spend was genuinely questionable. The incrementality of prospecting campaigns reaching new audiences was not.
Good incrementality thinking requires you to ask, for every channel and campaign, whether you are creating demand or capturing it. Creating demand means reaching people who would not have bought without your marketing. Capturing demand means reaching people who were going to buy anyway and making sure they buy from you. Both have value, but they have different economics, and they should not be measured the same way.
This is also relevant when evaluating newer channels and formats. Measuring the effectiveness of AI avatars in marketing, for example, requires the same incrementality discipline. A new format that generates engagement metrics does not automatically generate incremental revenue. The question is always the same: what would have happened without it?
The same logic applies to content and inbound activity. Inbound marketing ROI is notoriously difficult to measure precisely because the attribution chains are long and the contribution to revenue is diffuse. That does not mean inbound is not generating incremental revenue. It means you need to be honest about the limits of your measurement and make reasonable assumptions rather than defaulting to attribution models that overstate the case.
The Organisational Problem with Incrementality
Incrementality measurement is not primarily a technical challenge. The tools exist. The methodologies are well-documented. The real obstacle is organisational.
Incrementality testing produces results that can be uncomfortable. A channel that has been receiving significant budget may turn out to be generating far less incremental revenue than its attributed figures suggest. That finding has implications for team headcount, agency relationships, and executive narratives about what is driving growth. The people responsible for those channels have an incentive to resist the measurement or to challenge the methodology.
I have been on both sides of this. As an agency CEO, I had clients whose incrementality testing would have shown that some of what we were doing was less valuable than the attributed numbers implied. The honest response is to welcome that information and adjust the strategy. The defensive response is to find reasons why the test methodology is flawed. I have seen both responses, and I know which one produces better long-term client relationships and better business outcomes.
When I grew iProspect from around 20 people to over 100, the discipline that mattered most was not finding new channels to add to client plans. It was being honest about which existing activity was genuinely working. That honesty, even when it was uncomfortable, was what built trust with clients over time. Incrementality measurement is an extension of that same discipline.
One practical way to build organisational readiness for incrementality testing is to start with channels where the results are likely to be positive. If you have a prospecting campaign reaching genuinely new audiences, an incrementality test will probably confirm its value. That builds confidence in the methodology before you apply it to more sensitive areas. It also gives you a baseline for what good incremental performance looks like in your specific business context.
Incrementality and the Limits of Platform Data
One of the structural problems with relying on platform-reported data for incrementality is that platforms have an inherent interest in showing that their advertising works. Google’s attribution models credit Google channels. Meta’s attribution models credit Meta channels. This is not a conspiracy. It is a logical consequence of how the measurement systems are built and what incentives they serve.
The limitations of any single analytics tool are real, and they matter for incrementality specifically. Understanding what data Google Analytics goals cannot track is part of understanding why platform-reported conversion data should never be your only source of truth. Cross-device journeys, offline conversions, and view-through attribution all create gaps between what platforms report and what is actually happening in your business.
The most reliable incrementality data comes from tests that are independent of platform reporting. Holdout tests where you measure revenue in your own systems, not in the platform dashboard, are more credible than platform-provided lift studies, which have their own methodological limitations. Understanding how Google Analytics works at a technical level helps you identify where the data is reliable and where it is not.
This is also relevant for newer measurement challenges. Measuring the success of generative engine optimisation campaigns involves similar questions about what platform data can and cannot tell you about actual business impact. The incrementality question, what would have happened without this activity, applies regardless of the channel or format.
For a broader view of how these measurement challenges fit together across channels and tools, the Marketing Analytics hub covers the full landscape, from GA4 configuration to attribution methodology to the organisational habits that make measurement actually useful.
Building Incrementality Into Budget Decisions
The practical application of incrementality thinking is budget allocation. If you know, or have a reasonable estimate of, the incremental revenue generated by each channel, you can make better decisions about where to spend the next pound or dollar.
This does not require perfect measurement. It requires honest approximation. A business that knows its retargeting programme is probably generating 40% of its attributed revenue as truly incremental, and its prospecting campaigns are probably generating 80%, can make better budget decisions than a business that treats all attributed revenue as equally incremental.
The process for building this into budget planning is straightforward in principle, though it requires discipline in practice. Start by categorising your channels by their likely incrementality profile. Demand-creation channels (prospecting, brand awareness, content) tend to have higher incrementality. Demand-capture channels (brand search, retargeting, voucher affiliates) tend to have lower incrementality relative to their attributed figures. Apply that understanding as a discount factor when evaluating channel performance.
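Here is what that discount-factor adjustment looks like in practice, using the 40% and 80% incrementality estimates from the example above and hypothetical spend and attributed-revenue figures:

```python
# Applying incrementality discount factors to attributed revenue before
# comparing channels. The 40% / 80% factors echo the example above;
# spend and attributed-revenue figures are hypothetical.

channels = {
    #  name:        (spend,   attributed_revenue, incrementality_factor)
    "retargeting":  (100_000, 500_000,            0.40),
    "prospecting":  (100_000, 300_000,            0.80),
}

for name, (spend, attributed, factor) in channels.items():
    attributed_roas = attributed / spend
    incremental_roas = (attributed * factor) / spend
    print(f"{name:12s} attributed ROAS {attributed_roas:.1f}x "
          f"-> incremental ROAS {incremental_roas:.1f}x")

# retargeting: 5.0x attributed -> 2.0x incremental
# prospecting: 3.0x attributed -> 2.4x incremental
```

On attributed ROAS, retargeting looks like the stronger channel. Adjusted for incrementality, prospecting earns the next pound of budget. That reversal is the entire point of the exercise.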
Then run tests to validate or challenge those assumptions. Even a single holdout test per quarter, focused on your highest-spend channel, will give you data that improves your budget decisions meaningfully over time. Tracking the right marketing metrics at each stage of this process ensures you are measuring outcomes, not just activity.
The goal is not to achieve scientific precision. It is to make better decisions than your competitors who are running entirely on attributed data and never asking whether the revenue they are claiming credit for would have arrived anyway. That gap, between teams that measure incrementality honestly and teams that do not, compounds over time into a meaningful competitive advantage.
If you are working through how to apply this thinking across your full analytics setup, the Marketing Analytics and GA4 hub covers the measurement frameworks, tool configurations, and strategic questions that make incrementality analysis practically useful rather than theoretically interesting.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
