Advertising Spend Evaluation: Stop Measuring Activity, Start Measuring Growth
Evaluating advertising spend means determining whether the money you are putting into paid media is generating real business growth, not just measurable activity. Most businesses can tell you their cost per click. Far fewer can tell you whether their advertising is actually making the company more money than it costs.
That gap is where budgets go to die quietly.
After managing hundreds of millions in ad spend across more than 30 industries, I have seen the same pattern repeat: businesses optimise for the metrics their platforms give them, rather than the outcomes their business actually needs. The result is a very clean dashboard and a very unclear picture of whether any of it is working.
Key Takeaways
- Most advertising evaluation frameworks measure platform activity, not business growth. The two are not the same thing.
- Lower-funnel performance metrics often take credit for conversions that would have happened anyway. Incrementality is the question worth asking.
- Reach matters more than most performance marketers admit. Capturing existing demand is not the same as creating new demand.
- A good evaluation framework separates what advertising caused from what it merely coincided with.
- Honest approximation beats false precision. The goal is a defensible read on effectiveness, not a perfect number.
In This Article
- Why Most Advertising Evaluation Gets the Wrong Answer
- What a Proper Advertising Spend Evaluation Actually Looks At
- The Attribution Problem Nobody Wants to Talk About
- How to Build a Framework That Actually Tells You Something Useful
- Where Reach Fits Into the Evaluation Picture
- The Role of Market Penetration in Spend Evaluation
- What Good Advertising Evaluation Looks Like in Practice
- The Honest Conversation Most Businesses Are Not Having
Why Most Advertising Evaluation Gets the Wrong Answer
Early in my career, I was obsessed with lower-funnel performance. Click-through rates, conversion rates, cost per acquisition. The numbers were clean, the attribution was tight, and it felt like control. It took me longer than I would like to admit to question whether what I was measuring was real.
The problem with lower-funnel optimisation is that a significant portion of those conversions were going to happen anyway. Someone who already knows your brand, already wants your product, and searches for it directly is not a conversion your paid search ad created. It is a conversion your paid search ad intercepted. That is a meaningful distinction, and conflating the two leads to budgets being misallocated on a grand scale.
Think about it like a clothes shop. A customer who walks in and tries something on is far more likely to buy than one who walks past the window. The salesperson who closes that sale did not create the intent, they fulfilled it. If you credit the salesperson with the full sale and cut the marketing budget that drove the customer through the door, you are solving the wrong problem.
Performance marketing has a habit of claiming credit for the full wardrobe when it really only pressed the trousers.
This is not a criticism of performance marketing as a discipline. It is a criticism of how it is evaluated. The question is not “did someone convert after seeing our ad?” The question is “would they have converted without it?” That is an incrementality question, and most advertising evaluation frameworks do not ask it.
What a Proper Advertising Spend Evaluation Actually Looks At
A rigorous evaluation of advertising spend needs to work across three levels: efficiency, effectiveness, and incrementality. Most businesses only look at the first one.
Efficiency is the easiest to measure. Cost per click, cost per lead, cost per acquisition. These are platform metrics and they are useful for optimising within a channel. They tell you how well you are spending, not whether you should be spending at all.
Effectiveness is harder. It asks whether the advertising is achieving its intended business objective. That requires being clear about what the objective actually is. Brand awareness campaigns should be evaluated on reach, recall, and consideration shifts. Lead generation campaigns should be evaluated on lead quality and downstream revenue, not just volume. Revenue-driving campaigns should be evaluated on contribution to profit, not just top-line sales.
Incrementality is the hardest and the most important. It asks what would have happened without the advertising. This is where most businesses stop, because it requires either running controlled experiments or building econometric models, and both take time and resources. But without some read on incrementality, you are essentially flying blind on whether your advertising budget is creating value or just circulating it.
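To make the distinction concrete, here is a minimal sketch of an incrementality read: compare conversions in an exposed group against a held-out control, and price the spend against only the conversions the advertising actually caused. All figures are hypothetical, chosen to show how far a platform-reported CPA can sit from the incremental one.

```python
def incrementality(exposed_conversions, exposed_size,
                   control_conversions, control_size, spend):
    """Return incremental conversions and incremental cost per acquisition."""
    exposed_rate = exposed_conversions / exposed_size
    control_rate = control_conversions / control_size
    lift = exposed_rate - control_rate              # conversion rate the ads added
    incremental = lift * exposed_size               # conversions the ads caused
    icpa = spend / incremental if incremental > 0 else float("inf")
    return incremental, icpa

# Hypothetical campaign: 500 conversions reported, but the control group
# converted at a rate that accounts for most of them.
inc, icpa = incrementality(
    exposed_conversions=500, exposed_size=100_000,
    control_conversions=350, control_size=100_000,
    spend=25_000,
)
platform_cpa = 25_000 / 500  # what the dashboard reports
print(f"Incremental conversions: {inc:.0f}")
print(f"Platform CPA: {platform_cpa:.2f}, incremental CPA: {icpa:.2f}")
```

In this made-up example the dashboard reports a CPA of 50, but only 150 of the 500 conversions were incremental, so the real cost of a conversion the advertising created is closer to 167. The dashboard is not lying; it is answering a different question.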
If you are building or refining your broader commercial strategy, the thinking in our Go-To-Market and Growth Strategy hub connects advertising evaluation to the wider decisions around audience, positioning, and channel mix. Spend evaluation does not sit in isolation. It is one input into a larger commercial picture.
The Attribution Problem Nobody Wants to Talk About
Attribution is the framework that tells you which touchpoints in the customer journey contributed to a conversion. It sounds straightforward. It is not.
Every attribution model is a simplification of a complex reality. Last-click attribution gives all the credit to the final touchpoint, which almost always flatters search and retargeting. First-click attribution overstates the role of awareness channels. Data-driven attribution sounds more sophisticated but is still constrained by what the platform can see, which is typically only its own ecosystem.
I have sat in too many planning meetings where someone presents a channel-by-channel attribution breakdown as though it represents the ground truth. It does not. It represents one platform’s interpretation of a partial data set, filtered through a model designed, in part, to make that platform look good. That is not cynicism. That is just how the incentives work.
The more honest approach is to treat attribution as a directional signal, not a verdict. It tells you something. It does not tell you everything. The goal is not perfect measurement. It is honest approximation.
Forrester’s work on intelligent growth models has long pointed in this direction: the businesses that evaluate marketing most effectively tend to use multiple measurement approaches in parallel rather than betting everything on a single attribution framework. You can read some of that thinking in their intelligent growth model coverage.
How to Build a Framework That Actually Tells You Something Useful
Here is how I approach advertising spend evaluation in practice. It is not a formula. It is a set of questions that, answered honestly, give you a much clearer picture than any dashboard alone.
Start with the business objective, not the media plan. What is this campaign supposed to do for the business? Not for the channel, not for the agency, not for the quarterly report. For the business. Revenue growth, customer acquisition, market penetration, brand consideration in a new segment. The evaluation criteria should flow directly from the objective. If you cannot articulate the objective in one sentence, you are not ready to evaluate the spend.
Separate new customer acquisition from existing customer retention. These are fundamentally different economic activities with different cost structures and different returns. Blending them into a single CPA figure is how you end up over-investing in retention-heavy channels while under-investing in reach. Growth requires reaching new audiences, not just re-engaging people who already know you. This is one of the most consistently underappreciated points in performance marketing, and I have watched businesses stall because they confused a healthy retention rate with a healthy growth trajectory.
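A quick hypothetical shows how a blended figure hides the problem. The numbers below are invented purely for illustration: a cheap retention channel and an expensive acquisition channel, averaged into a single CPA that looks perfectly healthy.

```python
# Hypothetical spend and conversion figures for two activities that
# often get reported as one blended number.
new_spend, new_customers = 40_000, 200   # acquisition
ret_spend, ret_customers = 10_000, 800   # retention / re-engagement

blended_cpa = (new_spend + ret_spend) / (new_customers + ret_customers)
new_cpa = new_spend / new_customers
ret_cpa = ret_spend / ret_customers

print(f"Blended CPA:      {blended_cpa:.2f}")  # looks healthy
print(f"New-customer CPA: {new_cpa:.2f}")      # the growth number
print(f"Retention CPA:    {ret_cpa:.2f}")      # the flattering number
```

The blended CPA comes out at 50, which most boards would sign off without a second look. Split it apart and you are paying 200 to acquire a new customer and 12.50 to re-engage an existing one, which is a very different conversation about where growth is actually coming from.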
Run a hold-out test before you trust your attribution model. Take a geographic region or a customer segment and go dark on one channel for a period. Measure what happens to conversion rates. If they hold steady, that channel may not be doing what you think it is. If they drop, you have some evidence of real contribution. This is not a perfect test, but it is far more informative than reading attribution reports in isolation.
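The arithmetic behind a geo hold-out is simple enough to sketch. The version below is a rough difference-in-differences read: compare the change in conversion rate in the dark region against the change in regions that kept spending, so that seasonality common to both is netted out. The figures are hypothetical, and a real analysis would also want matched regions, a long enough window, and a significance check.

```python
def holdout_effect(test_before, test_during, control_before, control_during):
    """Difference-in-differences on conversion rates (given as proportions).

    A negative result suggests the paused channel was contributing;
    a result near zero suggests it was intercepting demand, not creating it.
    """
    test_change = test_during - test_before
    control_change = control_during - control_before
    return test_change - control_change

# Hypothetical: one region goes dark on a channel for a month.
effect = holdout_effect(
    test_before=0.040, test_during=0.031,       # region that went dark
    control_before=0.041, control_during=0.040, # regions that kept spending
)
print(f"Estimated effect of pausing the channel: {effect:+.3f}")
```

Here the dark region's conversion rate fell by 0.9 points while the control regions barely moved, leaving an estimated effect of roughly -0.008, i.e. some evidence of real contribution. Had the two moved together, the channel's attribution report would deserve much harder questioning.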
Look at the full revenue picture, not just the campaign window. Short-term conversion metrics miss the compounding value of brand investment. If your advertising is building awareness and consideration among people who will buy in six or twelve months, that value will not show up in a 30-day attribution window. This is one of the structural reasons why brand budgets get cut first in a downturn. The measurement window does not capture the return.
Ask what the counterfactual looks like. If you paused this campaign tomorrow, what would change? If the honest answer is “probably not much in the short term,” that is worth investigating. If the honest answer is “our lead volume would fall significantly,” you have a signal of real contribution. The counterfactual question is uncomfortable, but it is the right one.
Where Reach Fits Into the Evaluation Picture
One of the consistent blind spots in advertising evaluation is the undervaluation of reach. Businesses that have optimised heavily into lower-funnel channels over several years often find that their performance metrics look fine while their overall growth has slowed. The pipeline is efficient but shallow.
The reason is straightforward. Performance marketing is very good at capturing people who are already in-market. It is much less effective at creating the conditions for future demand. If you are not investing in reaching new audiences who are not yet ready to buy, you are slowly depleting the pool of future customers. The performance metrics will look fine right up until the moment the pipeline runs dry.
BCG’s work on commercial transformation has made a similar point about the limits of pure performance optimisation at the expense of reach and brand investment. Their zealot’s guide to commercial transformation is worth reading if you are building a case internally for rebalancing your channel mix.
Reach is harder to evaluate than conversion because the return is deferred and diffuse. But that difficulty is not a reason to ignore it. It is a reason to build better evaluation tools for it. Brand tracking studies, awareness surveys, and search volume trends for branded terms are all imperfect proxies, but they are better than nothing.
When I was growing an agency from 20 to 100 people, one of the things I noticed about our most commercially successful clients was that they maintained investment in brand-building even when short-term performance dipped. The ones who cut brand to protect performance metrics in a tough quarter almost always paid for it in year two. Not always dramatically. Often just a quiet erosion of new customer volume that took a while to trace back to its source.
The Role of Market Penetration in Spend Evaluation
How you evaluate advertising spend should also reflect where you are in your growth cycle. A business focused on market penetration needs to evaluate spend very differently from one focused on retention and lifetime value optimisation.
For penetration-focused businesses, the relevant metrics are share of voice, new customer acquisition rate, and category reach. For retention-focused businesses, the relevant metrics shift toward repeat purchase rate, customer lifetime value, and the cost of re-engagement. Applying the same evaluation framework to both is a category error that leads to poor decisions.
Semrush has a useful breakdown of market penetration strategies that is worth referencing if you are trying to build the business case for reach-oriented spend. The framing there aligns with what I have seen work in practice: penetration requires investment in new audiences, and evaluating that investment on short-term conversion metrics will always make it look worse than it is.
Similarly, if you are evaluating spend in a growth context, Semrush’s growth hacking examples illustrate how some of the most efficient growth stories have combined paid and organic investment in ways that traditional attribution frameworks struggle to credit accurately.
What Good Advertising Evaluation Looks Like in Practice
The businesses I have seen evaluate advertising spend most effectively tend to share a few habits.
They have a clear measurement framework agreed before the campaign launches, not retrofitted after. They know what success looks like and they have committed to it. This sounds obvious. It is surprisingly rare.
They use multiple data sources in parallel rather than relying on a single platform’s reporting. Platform data, third-party analytics, CRM data, and periodic brand tracking studies all contribute to the picture. No single source is treated as definitive.
They separate media efficiency from business effectiveness. A channel can be efficient and ineffective, or inefficient and effective. Conflating the two leads to cutting channels that are building long-term value because their short-term metrics look expensive.
They ask the incrementality question regularly, even if they cannot answer it perfectly. The discipline of asking it keeps the conversation honest and prevents the gradual drift toward measuring what is easy rather than what matters.
And they are willing to sit with uncertainty. Not every marketing investment can be measured precisely. The goal is not perfect attribution. It is a defensible read on whether the money is working. That requires judgment, not just dashboards.
There is a broader body of thinking on this in the Go-To-Market and Growth Strategy hub, particularly around how channel decisions, audience strategy, and spend allocation connect to commercial outcomes. Advertising evaluation is one piece of that picture, but it makes more sense when it sits inside a coherent growth framework.
The Honest Conversation Most Businesses Are Not Having
I have judged the Effie Awards, which means I have read through hundreds of effectiveness case studies from agencies and brands trying to demonstrate that their advertising worked. The cases that stand out are not the ones with the most impressive ROAS figures. They are the ones where the team has been honest about what they can and cannot prove, and where the business outcome is clear and attributable to a coherent strategy rather than a favourable market tailwind.
The hardest part of advertising evaluation is not the measurement. It is the honesty. It requires being willing to say “we are not sure this is working” when the platform dashboard says everything is fine. It requires questioning whether the growth you are seeing is caused by your advertising or merely correlated with it. It requires having conversations with stakeholders that are more complicated than showing a green chart.
None of that is comfortable. But it is the only way to make genuinely good decisions about where your budget should go. And in an environment where media costs are rising and growth is harder to find, the businesses that evaluate spend honestly will have a structural advantage over those that are still optimising for the metrics their platforms want them to see.
Vidyard’s research into pipeline and revenue potential for go-to-market teams points to a consistent gap between reported marketing performance and actual revenue contribution. Their Future Revenue Report is a useful reference point if you are trying to build a business case for more rigorous spend evaluation internally.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
