Advertising Performance Metrics: What You’re Measuring vs. What Matters

Advertising performance metrics tell you what happened. They rarely tell you why, or whether it was any good relative to what should have happened. The difference between those two things is where most marketing decisions go wrong.

Most marketing teams are not short of data. They are short of the right frame for interpreting it. Clicks, impressions, ROAS, CPA: these numbers have meaning, but only in context. Measured in isolation, they can make a failing campaign look successful and a strong one look weak.

Key Takeaways

  • A metric only has meaning relative to a benchmark. Absolute numbers without context are decorative, not diagnostic.
  • Lower-funnel metrics like ROAS and CPA are often measuring demand capture, not demand creation. They tell you how well you converted intent that already existed.
  • Growing while the market grows faster is not success. Relative performance against category growth is the metric most teams never look at.
  • Attribution models are a perspective on reality, not reality itself. Last-click attribution routinely overstates the contribution of bottom-funnel channels.
  • The most dangerous metric is one that looks good and is easy to report. Comfort in measurement is often a sign you are measuring the wrong thing.

Why Most Advertising Metrics Are Answering the Wrong Question

When I was running agencies, one of the most common conversations I had with clients went something like this: the numbers are up, the campaign is performing, everyone is happy. Then I would ask what the market did over the same period. Silence. Nobody had checked.

If your advertising drove a 10% increase in sales but the category grew by 20%, you did not have a good year. You lost ground while the tide was rising. That is not a performance story, it is a warning sign dressed up as a success. But because most advertising metrics are measured in absolute terms rather than relative ones, the warning goes unread.
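To make the arithmetic concrete, here is a short sketch using the hypothetical figures above: a brand that grows 10% while its category grows 20% has actually lost share.

```python
# Hypothetical figures: brand grows 10% while the category grows 20%.
brand_before, category_before = 100.0, 1000.0   # revenue, arbitrary units
brand_after = brand_before * 1.10               # +10% brand growth
category_after = category_before * 1.20         # +20% category growth

share_before = brand_before / category_before   # share of category revenue
share_after = brand_after / category_after

print(f"Share before: {share_before:.2%}")                          # 10.00%
print(f"Share after:  {share_after:.2%}")                           # 9.17%
print(f"Relative change: {(share_after / share_before - 1):.1%}")   # -8.3%
```

The absolute number went up; the relative one, the one that matters, went down by more than 8%.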

This is one of the most persistent problems in how marketing teams use data. The metrics are not wrong in themselves. ROAS is a real number. CPA is a real number. But they only answer the question “what happened?” They do not answer “was that good enough?” and they certainly do not answer “why did it happen?”

Advertising performance measurement, done properly, requires three things: the right metrics for the right stage of the funnel, honest benchmarks to measure against, and enough intellectual honesty to question what the numbers are actually telling you.

The Funnel Is Not a Metric. It Is a Framework for Choosing Metrics.

One of the clearest things I learned from spending years managing media budgets across 30-plus industries is that the funnel stage determines which metric is meaningful. A brand awareness campaign measured on CPA is being judged by the wrong standard. A direct response campaign measured on share of voice is equally misaligned.

Upper-funnel advertising, the kind designed to build brand salience and reach new audiences, should be measured on reach, frequency, brand recall, and share of voice. These are not soft metrics. They are the right metrics for the job. Measuring them on ROAS is like measuring a PR campaign on cost per lead. The instrument does not fit the objective.

Mid-funnel advertising, where you are nurturing consideration and intent, calls for engagement metrics: time spent, content interaction, return visits, and assisted conversions. These are the signals that someone is moving through a decision process, not just clicking something by accident.

Lower-funnel advertising, the conversion layer, is where CPA, ROAS, conversion rate, and revenue per click become meaningful. But here is the thing most performance teams do not want to hear: a lot of what gets credited to lower-funnel advertising was going to happen anyway. Someone who has already decided to buy, who searches for your brand and clicks a paid ad, was not converted by that ad. They were already converted. The ad captured their intent. It did not create it.

This distinction matters enormously when you are allocating budget. If you pull investment from brand and upper-funnel activity, your lower-funnel numbers will look fine for six to twelve months. Then the pipeline dries up because you stopped reaching people before they had intent. By the time you notice, the damage is done.

The Metrics That Actually Predict Growth

There is a version of advertising performance measurement that is genuinely predictive rather than just descriptive. It tends to involve metrics that are harder to pull from a dashboard and easier to dismiss in a client meeting. But they are the ones that correlate with long-term commercial outcomes.

Market share movement is the most honest advertising metric available. If your advertising is working, over time you should be gaining share in your category. Not every quarter, because category dynamics are messy, but directionally and over a twelve-month rolling period. BCG has written extensively about the relationship between commercial transformation and sustained growth, and the consistent finding is that share gain is the outcome that matters most.

New customer acquisition rate is the second one I always look for. Not total customers, not revenue, but the proportion of your growth coming from genuinely new buyers. If your revenue is growing but it is all coming from existing customers spending more, your advertising is not doing the heavy lifting of demand creation. It is serving people who were already yours. That is useful, but it is not growth.

Brand search volume is an underused proxy for advertising effectiveness at the top of the funnel. When people start searching for your brand unprompted, it is a signal that your advertising has created mental availability. It is not a perfect metric, but it is directionally useful and relatively easy to track through any search analytics tool.

Category entry points, the specific buying situations your brand is associated with, are harder to measure but worth tracking through brand tracking studies. The work of Byron Sharp and the Ehrenberg-Bass Institute has made this framework more mainstream, but in my experience, very few agencies actually build it into their measurement approach. They measure what the platform gives them, not what the business needs to know.

Why Attribution Models Mislead More Than They Inform

Attribution is one of the most debated topics in digital advertising, and for good reason. The model you use to attribute credit across touchpoints has a direct impact on where you invest next. Get it wrong and you systematically defund the channels doing the most useful work.

Last-click attribution, which still dominates in many businesses, gives 100% of the credit to the final touchpoint before conversion. In most cases, that is a branded search ad or a retargeting ad. Both of those touchpoints are reaching people who were already close to buying. They did not create the intent. They captured it.
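A minimal sketch of how the choice of model redistributes that credit, using a hypothetical three-touchpoint journey (the channel names and values are illustrative, not from any real account):

```python
# Hypothetical journey, ordered first touch -> last touch.
journey = ["display_prospecting", "organic_social", "branded_search"]
conversion_value = 100.0

# Last-click: 100% of the credit goes to the final touchpoint.
last_click = {channel: 0.0 for channel in journey}
last_click[journey[-1]] = conversion_value

# Linear: credit split evenly across every touchpoint.
linear = {channel: conversion_value / len(journey) for channel in journey}

print(last_click)   # branded_search receives all 100.0
print(linear)       # each channel receives ~33.3
```

Under last-click, the prospecting activity that started the journey reports zero value, which is exactly how it ends up defunded.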

I spent a period working with a client who had built their entire media strategy around last-click ROAS. Their branded search and retargeting campaigns looked extraordinary. Their prospecting campaigns looked weak. So they cut prospecting. Within eight months, branded search volume started declining because there was nobody new entering the top of the funnel. They had optimised themselves into a shrinking pool.

Data-driven attribution models are better, but they are not neutral. They are built on observed behaviour within the platforms that serve them, which means they tend to favour those same platforms. Google’s data-driven attribution model will, unsurprisingly, tend to distribute credit in ways that favour Google-owned touchpoints. That is not a conspiracy, it is just the natural output of a model trained on platform data.

Marketing mix modelling, done properly, gives a more honest picture. It measures the incremental contribution of each channel to business outcomes using econometric modelling rather than user-level tracking. It is slower and more expensive than dashboard attribution, but it is closer to the truth. If you are spending significant sums on advertising and you have never run an MMM, you are flying with instruments you cannot fully trust.

Tools like SEMrush’s growth analysis resources can help surface channel-level performance patterns, but they are one input into a broader picture, not the picture itself. The same applies to behavioural analytics platforms. Hotjar’s work on understanding growth loops is a useful frame for thinking about how user behaviour compounds over time, but it sits alongside, not above, business-level outcome data.

If you want a broader view of how measurement fits into go-to-market thinking, the Go-To-Market and Growth Strategy hub covers the commercial frameworks that sit behind advertising decisions, from channel selection through to performance architecture.

The Metrics That Look Good and Mean Nothing

Every experienced marketer has a list of vanity metrics they have been asked to report on. Mine includes reach figures that were never connected to a business objective, engagement rates on content that nobody bought anything from, and video completion rates on ads that ran on inventory nobody was actually watching.

Impressions are the most abused metric in advertising. A served impression is not a seen impression. A seen impression is not a recalled impression. A recalled impression is not a purchase. The chain of causality from impression to outcome is long, and each link has significant dropout. Reporting impressions as a headline number, without any connection to viewability, frequency, or audience quality, is theatre.
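That chain of dropout can be sketched numerically. The rates below are illustrative assumptions, not industry benchmarks; the point is how quickly a headline impression count shrinks once each link in the chain is applied:

```python
# Illustrative dropout rates (assumptions for the sketch, not benchmarks).
served = 1_000_000
viewability = 0.60    # served -> actually seen
recall_rate = 0.20    # seen -> recalled
action_rate = 0.02    # recalled -> purchase influenced

seen = served * viewability
recalled = seen * recall_rate
purchases = recalled * action_rate

print(f"{served:,} served -> {seen:,.0f} seen -> "
      f"{recalled:,.0f} recalled -> {purchases:,.0f} purchases")
```

A million-impression headline collapses to a few thousand purchase-relevant exposures under even generous assumptions, which is why impressions without viewability and audience context are theatre.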

Click-through rate is another one. A high CTR on a display ad might mean the creative is compelling. It might also mean the ad is positioned in a way that generates accidental clicks from people trying to close it. Without looking at post-click behaviour, a CTR number tells you almost nothing about whether the advertising is working.

Social engagement metrics (likes, shares, comments) are occasionally meaningful and routinely reported as if they are. I have seen campaigns with extraordinary engagement numbers that drove no measurable commercial outcome. I have also seen campaigns with modest engagement that drove significant sales. The correlation between social engagement and business outcomes is weak enough that it should never be a primary advertising performance metric unless the objective is explicitly community-building.

The test I apply to any metric before including it in a performance report is simple: if this number went up, would the business be better off? If the answer is “not necessarily,” it is a secondary metric at best. If the answer is “probably not,” it has no place in the report.

How to Build a Measurement Framework That Actually Works

A functional advertising measurement framework starts with the business objective, not the channel. What does the business need to achieve? Revenue growth, margin improvement, new customer acquisition, category share? Every advertising metric in the framework should trace back to one of those outcomes.

From there, you define the leading indicators. These are the metrics that move before the business outcome moves, and that have a demonstrated relationship with it. New visitor volume is a leading indicator of new customer acquisition. Brand search volume is a leading indicator of conversion rate over time. Frequency-adjusted reach among your target demographic is a leading indicator of brand salience.

Then you define the lagging indicators, the outcome metrics that confirm whether the leading indicators were pointing in the right direction. Revenue, market share, new customer rate, customer lifetime value. These move slowly and are influenced by factors beyond advertising, but they are the measures that ultimately validate the investment.

BCG’s research on go-to-market strategy in financial services illustrates how leading and lagging indicators need to be connected across a measurement architecture, not treated as separate reporting exercises. The same principle applies across categories.

The final component is the benchmark. Every metric needs a reference point. That reference point should be, in order of preference: your performance in the same period last year, the category average, a competitor benchmark, or an industry standard. What it should never be is an arbitrary target set without reference to any external context.

When I was growing an agency from a team of around 20 to over 100 people, one of the disciplines I embedded early was that every client performance review had to include a market context slide. Not because clients always wanted it, but because without it, we were all just congratulating ourselves on numbers that might mean nothing. The market context slide forced the conversation onto the right question: not “did we do well?” but “did we do well enough?”

Incrementality: The Test Most Advertisers Avoid

Incrementality testing is the most honest form of advertising measurement available, and it is the one most advertisers are reluctant to run. The reason is simple: it often shows that a significant portion of what you are spending was going to happen anyway.

An incrementality test works by comparing outcomes between a group exposed to your advertising and a statistically equivalent group that was not. The difference in outcomes between those two groups is the incremental effect of the advertising. Everything else, the conversions that would have happened regardless, is baseline behaviour.

The results can be uncomfortable. Retargeting campaigns, which typically show strong ROAS figures, often show modest incrementality because many of the people they reach were going to convert anyway. Prospecting campaigns, which often show weaker ROAS, frequently show stronger incrementality because they are reaching people who would not have converted without the advertising.
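A sketch of that comparison with hypothetical test numbers shows how the same retargeting campaign can report a strong naive ROAS and a much weaker incremental one:

```python
# Hypothetical retargeting test: exposed group vs held-out control group.
exposed_users, exposed_conversions = 50_000, 2_500   # 5.0% conversion rate
control_users, control_conversions = 50_000, 2_000   # 4.0% baseline rate
revenue_per_conversion = 80.0
ad_spend = 25_000.0

exposed_rate = exposed_conversions / exposed_users
control_rate = control_conversions / control_users

# Naive ROAS credits every exposed conversion to the advertising.
naive_roas = (exposed_conversions * revenue_per_conversion) / ad_spend

# Incremental conversions: those above what the control group predicts.
incremental = (exposed_rate - control_rate) * exposed_users
incremental_roas = (incremental * revenue_per_conversion) / ad_spend

print(f"Naive ROAS:       {naive_roas:.1f}x")        # 8.0x
print(f"Incremental ROAS: {incremental_roas:.1f}x")  # 1.6x
```

Most of the revenue the dashboard attributes to the campaign was baseline behaviour; only the difference between the two groups was actually bought with the ad spend.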

I have judged enough Effie Award entries to know that the campaigns with the most impressive reported ROAS are not always the ones with the most genuine commercial impact. The ones that demonstrate real incrementality, that show they moved behaviour rather than just captured it, are the ones that hold up under scrutiny.

Running incrementality tests requires holding back budget from a control group, which creates short-term opportunity cost. That is why most advertisers avoid it. But if you have never tested the incrementality of your major advertising channels, you do not actually know what your advertising is worth. You know what it correlates with. That is a very different thing.

For teams thinking about how measurement connects to broader go-to-market architecture, the Growth Strategy hub at The Marketing Juice covers how performance measurement fits alongside channel strategy, audience planning, and commercial goal-setting.

What Good Advertising Measurement Looks Like in Practice

Good advertising measurement is not about having more metrics. It is about having fewer, better ones, and the discipline to report them honestly even when they are inconvenient.

In practice, that means a reporting structure with no more than three to five primary metrics, each connected to a specific business objective. It means a clear distinction between what you are measuring as a leading indicator and what you are measuring as an outcome. It means benchmarks that reflect external reality, not internal targets set in isolation. And it means a regular conversation about what the numbers are not telling you, not just what they are.

Creator-led campaigns, which have become a significant part of many go-to-market strategies, present their own measurement challenges. Later’s work on creator-led go-to-market campaigns highlights how conversion attribution in influencer contexts requires a different approach from traditional paid media, because the touchpoints are distributed across platforms that do not always share data cleanly.

The same principle applies to any channel where the customer experience crosses platform boundaries. The measurement framework has to account for the experience as the customer actually experiences it, not as the platforms report it.

There are also growth-level tools that can help teams identify where performance is strong and where it is being overstated. SEMrush’s breakdown of growth analysis tools is a useful starting point for teams building out their measurement stack, though the tools are only as useful as the questions you are asking of them.

The honest version of advertising performance measurement is not complicated. It is just less comfortable than reporting numbers that always go up. It asks you to measure what matters rather than what is easy, to benchmark against external reality rather than internal expectation, and to test your assumptions rather than confirm them.

That discomfort is not a problem with the measurement. It is the measurement working correctly.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What are the most important advertising performance metrics to track?
The most important metrics depend on your funnel stage and business objective. For upper-funnel activity, track reach, frequency, brand recall, and share of voice. For mid-funnel, track engagement depth, return visits, and assisted conversions. For lower-funnel, track CPA, ROAS, and conversion rate. Across all stages, market share movement and new customer acquisition rate are the most honest indicators of whether advertising is driving genuine growth.
Why is ROAS a misleading advertising metric?
ROAS measures the revenue generated per pound or dollar spent on advertising, but it does not distinguish between revenue that was created by the advertising and revenue that would have happened anyway. Campaigns with high ROAS are often capturing existing intent rather than generating new demand. Without incrementality testing, a strong ROAS figure can significantly overstate the contribution of an advertising channel to business growth.
What is incrementality testing in advertising?
Incrementality testing measures the additional outcomes that occur because of advertising, compared to what would have happened without it. It works by comparing a group exposed to the advertising against a statistically equivalent control group that was not. The difference in outcomes between the two groups represents the true incremental effect of the campaign. It is the most honest form of advertising measurement available, though it requires holding back budget from a control group in the short term.
How does attribution modelling affect advertising budget decisions?
Attribution models determine how credit is distributed across the touchpoints in a customer experience. Last-click attribution gives all credit to the final touchpoint, which tends to be branded search or retargeting. This systematically overstates the value of bottom-funnel channels and understates the contribution of upper-funnel activity that created the intent in the first place. Teams that rely on last-click attribution often defund brand and prospecting campaigns, which eventually reduces the pipeline feeding into lower-funnel conversion.
What is the difference between a leading indicator and a lagging indicator in advertising measurement?
A leading indicator is a metric that moves before the business outcome changes and has a demonstrated relationship with it. Examples include brand search volume, new visitor rate, and frequency-adjusted reach among target audiences. A lagging indicator is an outcome metric that confirms whether the strategy is working over time, such as revenue, market share, or new customer acquisition rate. A functional measurement framework tracks both, using leading indicators to make in-campaign decisions and lagging indicators to evaluate long-term effectiveness.
