Incremental Revenue: What It Means and Why Most Marketers Measure It Wrong

Incremental revenue is the additional revenue generated directly as a result of a specific marketing action, channel, or campaign that would not have occurred without it. It is the difference between what happened and what would have happened anyway. That sounds simple. In practice, isolating it is one of the hardest problems in marketing measurement.

Most marketers are not measuring incremental revenue. They are measuring attributed revenue, which is a very different thing. Understanding the gap between those two concepts is where serious commercial thinking about marketing performance begins.

Key Takeaways

  • Incremental revenue measures what marketing actually caused, not what it was present for. Attributed revenue and incremental revenue are not interchangeable.
  • Last-click and even multi-touch attribution models routinely claim credit for sales that would have happened without any marketing intervention at all.
  • Holdout testing and geo-based experiments are the closest thing to a ground truth for incrementality, but they require planning, discipline, and a tolerance for short-term revenue sacrifice.
  • Channels with high attributed revenue and low incrementality are a budget leak. Identifying them is one of the most commercially valuable things a marketing team can do.
  • Incrementality measurement is not a one-time exercise. Customer behaviour, competitive context, and channel dynamics shift, and your baseline assumptions need to shift with them.

If you want a broader grounding in how measurement frameworks connect to commercial performance, the Marketing Analytics hub covers the full landscape, from attribution theory to GA4 implementation and beyond.

Why Attribution Is Not the Same as Incrementality

Early in my career, before I fully appreciated this distinction, I watched a paid search campaign take enormous credit for revenue that the business would have generated regardless. The brand terms were converting at scale, the cost-per-acquisition looked fantastic, and the channel dashboard was full of green numbers. What nobody was asking was how many of those customers had already decided to buy before they typed the brand name into Google.

Attribution models assign credit. They do not prove causation. A customer who sees a display ad on Tuesday, receives an email on Thursday, and converts via a branded paid search click on Friday will show up in most attribution reports as a paid search conversion, or as a shared conversion across three touchpoints. What none of those models tell you is whether removing any one of those touchpoints would have changed the outcome.

This is the core problem. Attribution theory in marketing has evolved considerably, but even the most sophisticated data-driven models are still fundamentally pattern-matching exercises. They observe correlations between touchpoints and conversions. They do not run controlled experiments. Incrementality measurement does.

The practical consequence is significant. A channel can show high attributed revenue while generating almost no incremental revenue at all. Retargeting is the classic example. If you are showing ads to people who already have your product in their basket, you are not creating demand. You are paying to be present at a moment of intent that already existed. The conversion would likely have happened anyway. Your retargeting platform claims the credit. Your incrementality test tells a different story.

How Incrementality Testing Actually Works

The most rigorous approach to measuring incremental revenue is the holdout test, sometimes called a ghost ad study or a conversion lift study. The principle is straightforward: divide your audience into two groups, expose one group to your marketing activity and withhold it from the other, then measure the difference in conversion rates between the two groups. The revenue difference, adjusted for group size, is your incremental revenue.
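The arithmetic behind a holdout test is simple once the groups are clean. The sketch below, using entirely hypothetical numbers, shows the calculation: the control group's conversion rate stands in for the counterfactual, and the gap between the two rates, scaled to the test group, gives the incremental revenue.

```python
def incremental_revenue(test_conversions, test_size, test_revenue,
                        control_conversions, control_size):
    """Estimate incremental revenue from a holdout test.

    The control group's conversion rate is the counterfactual baseline:
    what the exposed group would have done without the marketing activity.
    """
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    lift = test_rate - control_rate                     # incremental conversion rate
    incremental_conversions = lift * test_size
    revenue_per_conversion = test_revenue / test_conversions
    return incremental_conversions * revenue_per_conversion

# Hypothetical campaign: 900k exposed, 100k held out
result = incremental_revenue(
    test_conversions=18_000, test_size=900_000, test_revenue=1_440_000,
    control_conversions=1_700, control_size=100_000,
)
```

In this example the exposed group converts at 2.0% and the holdout at 1.7%, so only the 0.3 point gap is incremental: roughly £216,000 of the £1.44m the attribution report would claim.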

Geo-based holdout testing works on the same logic but at a geographic level. You select matched pairs of regions, run your campaign in one and suppress it in the other, and compare outcomes. This is particularly useful for channels where individual-level holdout is difficult to implement, such as out-of-home advertising or broad-reach digital campaigns.
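The geo version of the calculation compares treated regions against their matched controls. A minimal sketch, with hypothetical revenue figures, assuming the pairs were matched on pre-campaign behaviour:

```python
# Revenue by matched region pair during the campaign window (hypothetical).
# Each pair: (treated region revenue, matched control region revenue).
pairs = [
    (120_000, 104_000),
    (95_000, 88_000),
    (143_000, 131_000),
]

# Incremental revenue is the sum of treated-minus-control gaps
incremental = sum(treated - control for treated, control in pairs)

# Percentage lift relative to what the control regions suggest
# would have happened anyway
pct_lift = incremental / sum(control for _, control in pairs)
```

In practice you would also normalise for pre-period differences between the paired regions (a difference-in-differences adjustment) rather than trusting the raw comparison, but the logic is the same.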

When I was leading iProspect and we were scaling aggressively across multiple verticals, geo-based testing became one of the most commercially useful tools we had. Not because it was technically sophisticated, but because it gave clients a number they could defend in a board meeting. “Our paid social campaign drove £X in incremental revenue in these regions versus matched control regions” is a sentence that lands differently than “our paid social ROAS was 4.2x.”

Platforms including Meta and Google offer their own conversion lift studies, which are convenient but come with an obvious conflict of interest. They are measuring their own incrementality, using their own methodology, and reporting the results. That does not make them worthless, but it is worth understanding the incentive structure before treating the output as objective truth. Independent holdout testing, where you control the group assignment and the measurement, is more defensible.

For a complementary perspective on how measurement challenges apply to newer channel types, the article on how to measure the effectiveness of AI avatars in marketing works through similar attribution and incrementality questions in an emerging context.

The Baseline Problem

Incrementality measurement depends entirely on having an accurate baseline: what revenue would have occurred without the marketing activity in question. Getting that baseline wrong invalidates everything that follows.

Baselines are harder to establish than they look. Organic demand fluctuates. Seasonality affects customer behaviour. Competitive activity changes. A baseline established in Q1 may not be valid in Q4. A baseline that worked last year may be meaningless after a competitor entered the market or a macro event shifted consumer behaviour.

I have seen businesses make significant budget decisions based on incrementality analyses that used stale or poorly constructed baselines. The analysis looked rigorous. The methodology was sound on paper. But the baseline assumption was wrong, and so the incremental revenue figure was wrong, and so the budget decision was wrong. The maths was correct. The inputs were not.

This is why understanding what your analytics tools can and cannot tell you matters so much. If you are relying on platform data to construct your baseline, it is worth reading up on what data Google Analytics Goals are unable to track, because the gaps in your measurement infrastructure will directly affect the quality of your baseline assumptions.

A good baseline methodology typically involves one or more of the following: historical trend analysis with seasonal adjustment, matched market comparison, statistical modelling of organic demand drivers, or a pre-period holdout to establish a conversion rate before campaign launch. None of these is perfect. The goal is a defensible approximation, not false precision.
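As one illustration of the first approach, historical trend analysis, the sketch below projects a baseline for a seasonal window from two prior years of organic revenue. All figures are hypothetical, and a real baseline would incorporate more years and more demand drivers:

```python
# Hypothetical weekly organic revenue for the same four-week seasonal
# window in each of the last two years (no campaign ran in either).
year_1 = [100_000, 104_000, 98_000, 110_000]
year_2 = [108_000, 112_000, 105_000, 119_000]

# Year-on-year trend, estimated across the window as a whole
growth = sum(year_2) / sum(year_1)

# Project each of last year's weeks forward by the trend to get
# this year's expected no-campaign baseline
baseline = [week * growth for week in year_2]
expected_baseline_revenue = sum(baseline)
```

Revenue measured above this projection during the campaign is a candidate for incremental attribution; revenue at or below it is not, however generous the attribution model is feeling.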

Where Incremental Revenue Thinking Changes Budget Decisions

The commercial value of incrementality measurement is not academic. It directly changes where budget should go.

When I was at lastminute.com, I launched a paid search campaign for a music festival that generated six figures of revenue within roughly a day. It looked extraordinary on every standard metric. But a meaningful portion of that revenue came from brand searches by people who had already decided to buy tickets. They were going to convert regardless. The incremental contribution of the campaign was real, but it was smaller than the headline numbers suggested. Understanding that distinction mattered for how we planned the next campaign and how we thought about the channel mix.

Channels that generate high attributed revenue but low incremental revenue are a budget leak. They feel productive because the numbers are large and the attribution model is generous. But if you ran a holdout test, you would find that most of that revenue would have arrived anyway. The practical implication is that you are spending money to take credit for demand that already existed, rather than spending money to create new demand.

Channels that generate modest attributed revenue but high incrementality are frequently underinvested. Upper-funnel activity, awareness channels, and demand-generation programmes often look weak in attribution models because they operate early in the customer experience and the credit gets assigned elsewhere. Incrementality testing can reveal that these channels are doing significant commercial work that the attribution model cannot see.

This connects directly to how you think about inbound marketing ROI. Inbound programmes (content, SEO, organic social) tend to be particularly poorly served by last-click attribution. Their incremental contribution to revenue is often substantial but invisible in standard reporting. Incrementality testing is one of the few ways to make that contribution visible.

For a practical framework on applying incrementality thinking to a specific channel, the piece on how to measure affiliate marketing incrementality is worth reading alongside this one. Affiliate is a channel where the gap between attributed and incremental revenue is often particularly wide.

Marketing Mix Modelling as a Complement to Holdout Testing

Holdout testing is the gold standard for measuring incrementality at a channel or campaign level. But it has limitations. You cannot run holdout tests across every channel simultaneously without cannibalising your own measurement. And holdout tests are point-in-time: they tell you about incrementality during the test period, not across the full year.

Marketing mix modelling, often called MMM, takes a different approach. It uses statistical regression to decompose historical revenue into contributions from different marketing channels, accounting for external factors like seasonality, pricing, and macroeconomic conditions. The output is a set of coefficients that represent the incremental revenue contribution of each channel, holding all other variables constant.
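The regression at the heart of MMM can be sketched in a few lines. The toy example below is deliberately noise-free, with revenue generated from known coefficients so the fit recovers them exactly; real MMM work involves two or more years of weekly data, adstock and saturation transforms, and many more covariates:

```python
import numpy as np

# Hypothetical weekly data: intercept, spend on two channels, and a
# seasonality index. Six weeks shown; real MMM needs 100+.
X = np.array([
    # [intercept, search_spend, social_spend, seasonal_index]
    [1, 10_000, 5_000, 1.0],
    [1, 12_000, 6_000, 1.1],
    [1,  8_000, 7_000, 0.9],
    [1, 11_000, 4_000, 1.2],
    [1,  9_000, 8_000, 1.0],
    [1, 13_000, 5_500, 1.1],
])
# Weekly revenue, generated here from a known ground truth
# (base 50k, £4 per search pound, £6 per social pound, 20k x season)
y = np.array([140_000, 156_000, 142_000, 142_000, 154_000, 157_000])

# Ordinary least squares decomposition of revenue into channel contributions
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# coef[1] and coef[2] estimate incremental revenue per pound of spend
# on each channel, holding the other variables constant
```

The coefficients are the model's claim about incrementality. Whether that claim holds is exactly what calibrating against holdout tests is for.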

MMM has had a resurgence in recent years, partly because of the erosion of cookie-based tracking and partly because platforms like Meta have invested in making it more accessible. It is not a replacement for holdout testing. The two methodologies have different strengths and different failure modes, and the most commercially rigorous teams use them together, using holdout tests to calibrate and validate the MMM outputs.

The honest limitation of MMM is that it requires substantial historical data to produce reliable outputs, typically two or more years of weekly revenue and spend data across all channels. For businesses that have changed their channel mix significantly, or that operate in rapidly evolving categories, the historical data may not be a reliable guide to current incrementality. The model is only as good as the data it is trained on, and the assumptions baked into its structure.

Understanding how marketing metrics connect to business outcomes is foundational to making MMM outputs useful. The model produces numbers. Turning those numbers into budget decisions requires commercial judgement about what the model is and is not capturing.

Incrementality in Emerging and Non-Traditional Channels

The incrementality question becomes more complex as marketing activity extends into channels where measurement infrastructure is less mature.

Generative engine optimisation is a good example. As AI-powered search changes how consumers discover brands and products, the question of what incremental revenue can be attributed to GEO activity is genuinely difficult to answer with existing tools. The article on how to measure the success of generative engine optimisation campaigns addresses this directly, and the measurement challenges it describes are a useful illustration of why incrementality frameworks need to evolve alongside channel behaviour.

The same principle applies to influencer marketing, podcast advertising, connected TV, and any channel where individual-level tracking is limited or unreliable. In these contexts, geo-based holdout testing and MMM become even more important, because platform-reported metrics are often the only data available, and those metrics are not designed to answer the incrementality question.

The temptation in immature measurement environments is to fall back on proxy metrics: reach, engagement, share of voice. These metrics have value as indicators of activity, but they do not answer the incrementality question. They tell you that something happened. They do not tell you whether it changed revenue outcomes. Data-driven marketing frameworks help structure thinking here, but the analytical rigour still has to come from the team applying them.

Building an Incrementality Measurement Practice

Most marketing teams do not have a formal incrementality measurement practice. They have attribution reports, platform dashboards, and periodic campaign analyses. That is a starting point, not a measurement framework.

Building a genuine incrementality practice requires three things: a clear definition of what you are measuring, a methodology for establishing a counterfactual baseline, and the organisational willingness to act on findings that may be commercially uncomfortable.

That last point is the one that derails most incrementality programmes. If a holdout test reveals that your retargeting programme has near-zero incrementality, that is a finding with budget implications. Someone owns that budget. Someone championed that channel. Acting on the finding requires political will as much as analytical capability. I have been in rooms where incrementality data was quietly shelved because the conclusion was inconvenient. The measurement was not the problem. The culture was.

A practical starting point for most teams is to run a single holdout test on the channel where you have the most suspicion that attributed revenue is overstated. Retargeting, brand paid search, and loyalty email programmes are common candidates. Design the test carefully, run it for long enough to achieve statistical significance, and report the findings honestly. That first test, and how the organisation responds to it, tells you a great deal about whether a genuine incrementality practice is achievable.
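"Long enough to achieve statistical significance" can be made concrete with a standard two-proportion z-test. A minimal sketch, using Python's standard library and hypothetical group sizes:

```python
from math import sqrt, erf

def holdout_z_test(conv_test, n_test, conv_control, n_control):
    """Two-proportion z-test for a holdout experiment.

    Returns the z statistic and a two-sided p-value; a small p-value
    suggests the measured lift is unlikely to be noise.
    """
    p_test = conv_test / n_test
    p_control = conv_control / n_control
    # Pooled rate under the null hypothesis of no lift
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
    z = (p_test - p_control) / se
    # Two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 100k per group, 2.04% vs 1.90% conversion
z, p = holdout_z_test(2_040, 100_000, 1_900, 100_000)
```

With these numbers the lift clears the conventional 5% significance threshold, but only just, which is exactly the kind of result that tempts teams to stop the test early. Resist that; decide the duration and sample size before launch.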

For teams building out their analytics infrastructure more broadly, understanding GA4 audiences is a useful parallel workstream. The audience segmentation capabilities in GA4 can support holdout test design and help you understand which customer segments are driving incremental conversion versus which are converting regardless of marketing exposure.

There is also a useful connection to how you think about email reporting. Email marketing reporting frameworks typically focus on open rates, click rates, and attributed revenue. Layering incrementality thinking on top of those metrics, by testing whether suppressed segments convert at similar rates, gives you a much clearer picture of what your email programme is actually contributing to revenue.

The full picture on how these measurement approaches connect sits within a broader analytics discipline. If you are building or refining your measurement framework, the Marketing Analytics hub covers the methodologies, tools, and commercial frameworks that underpin serious performance measurement.

What Incremental Revenue Measurement Actually Requires of You

Incrementality measurement is not a tool you buy or a report you run. It is a way of thinking about marketing performance that treats causation as a question to be tested rather than an assumption to be made.

It requires intellectual honesty about what your current measurement tells you and what it does not. It requires willingness to design experiments that may produce inconvenient findings. And it requires the commercial confidence to act on those findings, even when they challenge channels or programmes that have organisational momentum behind them.

Having judged the Effie Awards, I have seen the full spectrum of how marketing effectiveness is evidenced. The entries that stand out are not the ones with the most impressive attributed numbers. They are the ones where the team has genuinely isolated what their marketing caused, separated it from what would have happened anyway, and connected that incremental effect to a commercial outcome. That discipline is rarer than it should be. It is also more valuable than almost anything else a marketing team can develop.

Most marketing metrics, as I have argued consistently, are useful in context and meaningless on their own. Incremental revenue is different. It is the number that tells you whether your marketing is creating value or just measuring the value that was already there.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between incremental revenue and attributed revenue?
Attributed revenue is the revenue that an attribution model assigns to a marketing channel or touchpoint based on its presence in the customer experience. Incremental revenue is the revenue that would not have occurred without that marketing activity. The two figures can be very different. Attribution models observe correlation between touchpoints and conversions. Incrementality measurement tests whether removing the touchpoint would have changed the outcome. A channel can show high attributed revenue and low incremental revenue if it is reaching customers who would have converted anyway.
How do you run an incrementality test?
The most common approach is a holdout test. You divide your target audience into two groups: one that receives the marketing activity and one that does not. After a defined test period, you compare conversion rates and revenue between the two groups. The difference, adjusted for group size, represents your incremental revenue. Geo-based holdout tests apply the same logic at a regional level, running the campaign in some markets and suppressing it in matched control markets. Both approaches require careful group matching, sufficient test duration to achieve statistical significance, and clean measurement of outcomes in both groups.
Which marketing channels typically have the lowest incrementality?
Retargeting, branded paid search, and loyalty email programmes are the channels most frequently found to have lower incrementality than their attributed revenue suggests. This is because they tend to reach audiences with high existing purchase intent, people who have already visited your site, searched for your brand, or are existing customers. These audiences often convert regardless of whether the marketing activity is present. That does not mean these channels have zero value, but it does mean their incremental contribution is often smaller than standard attribution models imply.
What is marketing mix modelling and how does it relate to incrementality?
Marketing mix modelling is a statistical technique that uses regression analysis to decompose historical revenue into contributions from different marketing channels and external factors. It estimates the incremental revenue contribution of each channel by holding other variables constant. It is complementary to holdout testing rather than a replacement for it. Holdout tests provide precise incrementality measurements for specific campaigns or channels at a point in time. MMM provides a view of incrementality across all channels over a longer historical period. The strongest measurement approaches use both, with holdout tests helping to validate and calibrate MMM outputs.
Why does incrementality measurement matter for budget allocation?
Budget allocation based on attributed revenue alone tends to over-invest in channels that are good at claiming credit and under-invest in channels that generate demand earlier in the customer experience. Incrementality measurement corrects for this by identifying which channels are actually causing revenue rather than simply being present when it occurs. Channels with high incrementality and modest attributed revenue are often underinvested. Channels with high attributed revenue and low incrementality are often over-funded. Shifting budget based on incremental contribution rather than attributed contribution is one of the most commercially significant things a marketing team can do.
