Advertising Spend Evaluation: Stop Measuring What’s Easy

Evaluating advertising spend well means separating what your investment actually caused from what would have happened anyway. Most businesses skip that distinction entirely, which means their budget decisions are built on a foundation of flattering but unreliable numbers.

The mechanics are not complicated. The discipline is. You need clear commercial objectives, honest attribution, and the willingness to defund channels that look good on paper but are not driving incremental growth. That last part is where most organisations stall.

Key Takeaways

  • Most advertising measurement frameworks reward channels that capture existing demand rather than channels that create new demand, which distorts budget allocation over time.
  • Last-click and platform-reported attribution are perspectives on performance, not accurate records of it. Treat them accordingly.
  • Incrementality, not efficiency, is the correct lens for evaluating whether advertising spend is working.
  • Lower-funnel channels will almost always look better in a standard attribution model than they deserve to, because they intercept intent that was already forming.
  • A budget review that only asks “what performed?” without asking “what would have happened without it?” is not an evaluation. It is a rationalisation.

Why Most Advertising Evaluations Measure the Wrong Thing

Spend enough time inside agency P&Ls and client budget reviews and a pattern becomes obvious. Brands consistently over-invest in channels that look efficient and under-invest in channels that are actually growing the business. The two are not always the same thing.

Earlier in my career, I was guilty of this myself. I overvalued lower-funnel performance metrics because they were clean, reportable, and made clients feel good in monthly reviews. Paid search returning a 4:1 ROAS looks compelling on a slide. What that slide does not show is how much of that return was driven by people who were already going to buy, and who would have found their way to the brand regardless of whether the ad appeared.
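The gap between reported and incremental return comes down to back-of-envelope arithmetic. In the sketch below, every figure, including the 60% share of buyers who would have converted anyway, is a hypothetical assumption for illustration:

```python
# Illustrative only: the 0.60 baseline share is an assumed figure,
# not a benchmark. It represents conversions that would have
# happened without the ad.

spend            = 10_000
reported_revenue = 40_000   # the 4:1 ROAS on the slide
baseline_share   = 0.60     # assumed fraction who would have bought anyway

incremental_revenue = reported_revenue * (1 - baseline_share)
reported_roas    = reported_revenue / spend
incremental_roas = incremental_revenue / spend

print(f"Reported ROAS:    {reported_roas:.1f}:1")
print(f"Incremental ROAS: {incremental_roas:.1f}:1")
```

Under these assumptions, the 4:1 channel is really delivering 1.6:1 of caused revenue. The slide is not lying; it is just answering a different question.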

The clothes shop analogy is useful here. Someone who walks into a store and tries something on is far more likely to buy than someone who has not. If you only count the people who tried something on, your conversion rate looks excellent. But you have not measured the work that got them through the door in the first place. Lower-funnel advertising has the same problem. It intercepts intent that was already forming, often created by brand activity that never gets the credit.

This is not a criticism of performance marketing. It is a criticism of how it is measured and what conclusions are drawn from those measurements. If you want to evaluate advertising spend honestly, you have to start by accepting that your current measurement framework probably flatters certain channels at the expense of others.

For a broader view of how spend evaluation fits into commercial planning, the Go-To-Market and Growth Strategy hub covers the full picture, from channel selection through to budget architecture and measurement frameworks.

What Incrementality Actually Means in Practice

Incrementality asks a simple question: did this advertising cause something to happen that would not have happened otherwise? It sounds obvious. Almost no one applies it rigorously.

The gold standard for measuring incrementality is a controlled experiment. You run advertising to one group and withhold it from a comparable group, then measure the difference in outcomes. This is how the most sophisticated advertisers in the world evaluate their spend, and it consistently produces results that look different from what platform dashboards report.

Most businesses cannot run controlled experiments at scale, and that is fine. But there are practical proxies. Geographic holdout tests, where you pause spend in one region and compare it against active regions, are accessible to most advertisers. Time-based analysis, looking at what happens to sales when a channel goes dark, gives you a rough read on dependency. Neither is perfect. Both are more honest than accepting platform-reported attribution at face value.
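A geographic holdout reduces to a simple difference calculation between exposed and unexposed regions. The sketch below uses invented weekly revenue figures, and glosses over the matching work a real test requires (comparable regions, a pre-period baseline):

```python
# Hypothetical geo holdout: spend paused in "holdout" regions,
# maintained in comparable "active" regions. All figures invented.

def incremental_lift(active_sales, holdout_sales, spend):
    """Estimate revenue caused by the advertising, and the incremental ROAS."""
    # The difference between exposed and unexposed groups is the
    # incremental effect, assuming the regions are genuinely comparable.
    incremental = sum(active_sales) - sum(holdout_sales)
    return incremental, incremental / spend

active  = [52_000, 49_500, 51_200, 50_300]  # weekly revenue, ads running
holdout = [48_100, 47_900, 48_600, 47_400]  # weekly revenue, ads paused
spend   = 6_000                             # total spend in active regions

lift, iroas = incremental_lift(active, holdout, spend)
print(f"Incremental revenue: £{lift:,}")
print(f"Incremental ROAS: {iroas:.2f}")
```

The point of the structure is that the comparison group does the work the attribution model cannot: it tells you what would have happened anyway.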

I have run budget reviews where pausing a high-ROAS paid search campaign for four weeks produced no measurable decline in revenue. The traffic came through organic instead. The business had been paying to intercept its own brand searches. That is not a failure of the channel. It is a failure of evaluation. The spend looked productive right up until someone tested whether it actually was.

The Attribution Problem Is Not Going Away

Attribution is not a solved problem, and anyone who tells you otherwise is selling you something. Every model, from last-click through to data-driven, is a set of assumptions dressed up as analysis. The assumptions differ. The fundamental limitation does not.

Last-click attribution remains the default for many businesses because it is simple and available. It assigns all credit for a conversion to the final touchpoint before purchase. This systematically favours retargeting, branded search, and email, because those channels tend to appear at the end of the path. Channels that do the early work of building awareness and consideration get nothing.
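The credit-assignment difference is mechanical, and easy to see in code. This sketch contrasts last-click with an equal-split (linear) model over one hypothetical conversion path:

```python
from collections import Counter

# One hypothetical customer's touchpoints, in order, before purchase.
path = ["display", "social_video", "organic_search", "email", "branded_search"]

def last_click(path):
    """All credit to the final touchpoint before conversion."""
    return Counter({path[-1]: 1.0})

def linear(path):
    """Credit split equally across every touchpoint in the path."""
    return Counter({channel: 1 / len(path) for channel in path})

print(last_click(path))  # branded_search gets everything
print(linear(path))      # each channel gets 0.2
```

Neither model is "correct". Both are assumptions about how credit should flow, and the channels at the end of the path win or lose depending entirely on which assumption you pick.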

Data-driven attribution sounds more sophisticated, and in some respects it is. But it is still operating within the closed loop of what the platform can observe. It cannot account for the TV ad someone saw, the word-of-mouth recommendation they received, or the brand familiarity built over years of consistent presence. It measures what it can measure and calls that the full picture.

Marketing mix modelling is the most honest approach available at scale, because it works from actual business outcomes rather than click-path data. It is not cheap to do properly, and it requires clean data over a meaningful time period. But it is the only method that can account for offline media, external variables like seasonality and competitor activity, and the long-term effects of brand investment. For businesses managing significant spend across multiple channels, it is worth the investment. Forrester’s work on go-to-market measurement reinforces how often organisations underestimate the complexity of attributing outcomes accurately across channels.

The practical takeaway is this: use attribution as a directional tool, not a definitive one. It tells you something. It does not tell you everything. Build your evaluation framework around that honest limitation rather than pretending the model is more precise than it is.

How to Structure an Honest Budget Review

A budget review that only asks which channels delivered the best reported ROAS is not an evaluation. It is a confirmation of whatever your attribution model was already predisposed to show. A useful budget review asks harder questions.

Start with the commercial objective, not the channel data. What was the business trying to achieve? New customer acquisition, retention, category entry, revenue against a specific target? The objective determines what success looks like. Without it, you are just sorting channels by efficiency metrics that may or may not connect to anything that matters commercially.

Then ask what the spend actually reached. Not impressions or sessions, but real people. How many of them were genuinely new to the brand? How many were existing customers being served ads they did not need? One of the most consistent findings when I have audited channel performance across clients is that a significant portion of retargeting spend is reaching people who were already in the purchase process and would have converted without the nudge. That spend is not worthless, but it is not as valuable as it appears.

Next, look at what happened when spend was paused or reduced. If you have any periods where a channel went dark, even briefly, that data is valuable. A channel that genuinely creates demand will show a measurable impact when it disappears. A channel that mostly captures existing intent will show very little.

Finally, look at the full funnel, not just the bottom of it. If brand search volume is growing, if direct traffic is increasing, if organic conversion rates are improving, something upstream is working. A budget review that ignores those signals and only optimises toward last-touch efficiency will gradually defund the channels responsible for creating the demand it is then paying to capture.

Tools like the growth frameworks documented by SEMrush are useful for understanding how channel investment connects to organic and paid outcomes over time, particularly for businesses where search visibility is a meaningful part of the acquisition mix.

The Efficiency Trap: When Good Metrics Lead to Bad Decisions

There is a version of advertising optimisation that looks extremely disciplined and produces steadily declining business results. It works like this: you cut anything that does not hit your efficiency threshold, the remaining channels look even better because they are capturing the demand that the cut channels helped create, you cut more, and the cycle continues until you have a very efficient advertising programme that is slowly starving the business of new customers.

I have watched this happen. The business in question had genuinely impressive ROAS figures. Their cost per acquisition was falling quarter on quarter. Their brand awareness, measured separately, was also falling. New customer acquisition had plateaued. The efficiency gains were real. The growth was not.

The efficiency trap is particularly dangerous for businesses that have been advertising for several years. The brand equity built during earlier, less efficient periods is doing invisible work. The channels that look efficient now are partly benefiting from that earlier investment. Cut the brand-building activity, and the efficiency of the lower-funnel channels will eventually follow it down.

This is one of the reasons I find BCG’s thinking on scaling relevant to advertising evaluation. The discipline required to maintain investment in activities whose returns are long-term and hard to attribute directly is genuinely difficult, especially when short-term efficiency metrics are available and visible. It requires a leadership team that understands what the numbers are and are not measuring.

Reach and Frequency: The Variables Most Evaluations Ignore

Effective advertising requires reaching enough people, enough times, to change their behaviour. Most budget evaluations focus entirely on conversion metrics and ignore whether the advertising is actually reaching new people at meaningful frequency.

Reach is the most undervalued metric in digital advertising. The ability to target precisely has led many advertisers to fish in smaller and smaller ponds, reaching the same people repeatedly while ignoring the much larger pool of potential customers who have never encountered the brand. This feels efficient because the targeted audience converts at a higher rate. It is not efficient at a business level, because it does nothing to grow the addressable customer base.

When I was growing a team from around 20 people to over 100, one of the things that became clear was that the clients with the healthiest long-term trajectories were the ones investing in broad reach alongside their performance activity. Not because broad reach was easy to justify in a monthly report, but because it was doing the work that made the performance activity productive over time. The clients who cut brand spend to fund more performance activity consistently struggled to maintain growth past a certain point.

A useful evaluation question is: what percentage of this campaign’s spend reached people who had not previously been exposed to the brand? If that number is very low, the campaign is working the existing audience hard and largely ignoring new ones. That may be appropriate for a retention objective. It is not appropriate for a growth objective.

Creator-led campaigns have become one of the more effective ways to extend genuine reach into new audiences, particularly for brands where traditional display has become heavily saturated within the core target segment.

What a Useful Advertising Evaluation Actually Produces

A well-run advertising evaluation should produce three things: a clear view of which spend is creating incremental value, a clear view of which spend is capturing value that would have been created anyway, and a set of decisions about what to do differently as a result.

The first category is worth protecting and growing. The second category is not worthless, but it should be sized appropriately rather than allowed to absorb budget that could be creating new demand. The third category is the one most organisations skip. They run the evaluation, note the findings, and then carry forward broadly the same budget allocation because changing it requires difficult conversations.

Having judged the Effie Awards, I have seen the work that actually drives business outcomes. It is rarely the most efficient work in a narrow attribution sense. It is usually the work that reached the right people at scale, built a genuine connection with the brand, and created the conditions for conversion over time. The measurement frameworks that most businesses use would not have predicted its success in advance, and some would have cut it mid-flight.

The honest version of advertising evaluation is uncomfortable. It requires accepting that some of your best-looking numbers are not telling the full story. It requires investing in measurement approaches that are more complex and less flattering than platform dashboards. And it requires making budget decisions that cannot always be justified by the numbers in front of you, because the numbers in front of you are incomplete.

That discomfort is the work. Avoiding it is how businesses end up with advertising programmes that look excellent in reporting and produce diminishing returns in the real world.

If you are working through broader questions about how advertising investment fits into your overall commercial strategy, the Go-To-Market and Growth Strategy hub covers the frameworks that connect spend decisions to business outcomes, from channel architecture through to growth planning.

Growth hacking tools and tactics, as covered by SEMrush’s breakdown of growth tools, can play a role in identifying where spend is underperforming, but they are most useful when the underlying evaluation framework is sound. Tools do not fix a measurement problem. They amplify whatever assumptions you started with.

And if you are using feedback loops to understand how audiences are responding to advertising over time, Hotjar’s thinking on growth loops offers a useful lens on how qualitative signals can complement the quantitative picture your attribution model provides.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the best way to evaluate advertising spend?
The most honest evaluation combines incrementality testing, which measures what the advertising actually caused, with marketing mix modelling for a view of channel contribution over time. Platform-reported attribution is a useful starting point but should not be treated as a definitive answer. The most important question is whether the spend created outcomes that would not have happened without it.
Why does lower-funnel advertising often look better than it is?
Lower-funnel channels like branded paid search and retargeting tend to intercept people who were already planning to buy. Standard attribution models assign them full credit for the conversion, even though the intent was often created by earlier brand activity. This makes them appear more efficient than they are, and leads to over-investment at the expense of the channels doing the upstream work.
How do you measure incrementality without running a controlled experiment?
Geographic holdout tests are the most accessible alternative. Pause spend in one region while maintaining it in comparable regions, then measure the difference in outcomes. Time-based analysis, looking at what happens to sales or traffic when a channel goes dark, also provides useful signals. Neither approach is as precise as a controlled experiment, but both are more honest than accepting platform attribution as the final word.
What is marketing mix modelling and when should you use it?
Marketing mix modelling is a statistical approach that uses historical business data to estimate the contribution of different marketing inputs to sales outcomes. Unlike click-path attribution, it can account for offline channels, external variables like seasonality, and the long-term effects of brand investment. It requires clean data over a sustained period and meaningful investment to do properly, which makes it most appropriate for businesses managing significant spend across multiple channels.
How often should you review advertising spend allocation?
A meaningful evaluation should happen at least quarterly, with a deeper review annually that looks at channel contribution over a full year rather than month-by-month performance. Monthly reporting is useful for operational decisions but too short a window to evaluate whether a channel is genuinely driving business outcomes. Decisions made purely on monthly data tend to over-optimise toward short-term efficiency at the expense of long-term growth.
