Measuring Marketing Performance: Stop Counting What Doesn’t Count
Measuring marketing performance means connecting marketing activity to business outcomes, not just tracking the metrics that are easiest to report. Most marketing teams are not short of data. They are short of data that tells them whether what they are doing is actually working.
The measurement problem in marketing is not technical. The tools exist. The problem is that most measurement frameworks are built to justify spend rather than interrogate it, and that distinction shapes everything that follows.
Key Takeaways
- Most marketing measurement frameworks are designed to confirm activity, not evaluate impact. Fixing that distinction is where performance improvement actually starts.
- In-channel metrics like click-through rate and cost-per-click measure efficiency within a channel, not the commercial value of the channel itself.
- A significant portion of what performance marketing claims credit for would have happened without the ad. Honest measurement has to account for that.
- Incremental lift, not last-click attribution, is the closest proxy for true marketing contribution to revenue.
- The goal is not perfect measurement. It is honest approximation that supports better decisions, not a dashboard that makes the team look good.
In This Article
- Why Most Marketing Measurement Is Designed to Confirm, Not Evaluate
- What Are You Actually Trying to Measure?
- The Attribution Model Problem
- How to Build a Measurement Framework That Actually Works
- The Metrics That Get Overweighted and Why
- Brand Measurement: The Part Most Teams Get Wrong
- The Role of Testing in a Credible Measurement Culture
- What Good Marketing Measurement Actually Looks Like
Why Most Marketing Measurement Is Designed to Confirm, Not Evaluate
When I was running agency teams early in my career, performance dashboards were a source of pride. Click volume up. Cost per acquisition down. Conversion rate holding steady. Everyone in the room felt good. What nobody asked, and what I was not asking either, was how much of that activity was actually driving new business versus capturing people who were already going to buy.
That is the question most measurement frameworks are not built to answer. They measure what happened inside a channel. They do not measure what would have happened without it.
Think about a clothes shop. When someone tries something on, they are far more likely to buy than someone who is browsing. That does not mean the fitting room caused the purchase. It means the person was already close to buying. Retargeting ads, branded search campaigns, and bottom-of-funnel email sequences work in exactly the same way. They find people who are already warm and take credit when those people convert. The measurement looks strong. The incremental contribution is often much smaller than it appears.
If you are thinking about how measurement fits into a broader commercial framework, the articles across the Go-To-Market and Growth Strategy hub cover the strategic context that makes measurement decisions meaningful.
What Are You Actually Trying to Measure?
Before any framework, any tool, any attribution model, you need to be honest about what question you are trying to answer. There are three distinct things marketing measurement can tell you, and conflating them is one of the most common sources of bad decisions.
The first is efficiency: how well are you using your budget within a channel? Cost per click, return on ad spend, cost per lead. These are useful operational metrics. They tell you whether a campaign is running well, not whether it is worth running.
The second is attribution: which touchpoints are associated with a conversion? This is where most measurement energy goes, and where most of the distortion lives. Attribution models are not truth. They are a set of assumptions about how credit should be distributed. Last-click attribution overstates the contribution of the final touchpoint. First-click overstates the top of funnel. Data-driven models are better but still depend on what is observable, which means they systematically underweight anything that cannot be tracked.
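To make that assumption-dependence concrete, here is a minimal sketch of how the same three-touch journey earns completely different credit depending on the rule you pick. The journey, channel names, and credit rules are illustrative, not any platform's actual logic.

```python
# One hypothetical journey, three attribution rules, three different stories.
journey = ["display", "paid_social", "branded_search"]  # ordered touchpoints

def attribute(path, model):
    """Distribute one conversion's worth of credit across a touchpoint path."""
    credit = dict.fromkeys(path, 0.0)
    if model == "last_click":
        credit[path[-1]] = 1.0
    elif model == "first_click":
        credit[path[0]] = 1.0
    elif model == "linear":
        for channel in path:
            credit[channel] += 1.0 / len(path)
    return credit

for model in ("last_click", "first_click", "linear"):
    print(f"{model:12s} {attribute(journey, model)}")
```

None of the three outputs is more true than the others. The rule you choose is the claim you are making.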
The third is incrementality: what would have happened without this activity? This is the hardest to measure and the most commercially important. It requires holdout testing, geo-based experiments, or media mix modelling. It is not glamorous work. But it is the only way to know whether you are generating demand or just harvesting it.
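The arithmetic behind incrementality is simple even when the experiment is not. A minimal sketch, with hypothetical numbers:

```python
# Incrementality in three lines of arithmetic (all numbers hypothetical).
exposed_conv_rate = 0.050   # 5.0% of the exposed audience converted
holdout_conv_rate = 0.042   # 4.2% converted anyway, with no campaign

incremental_rate = exposed_conv_rate - holdout_conv_rate    # 0.8 points
relative_lift = incremental_rate / holdout_conv_rate        # ~19%
truly_incremental = incremental_rate / exposed_conv_rate    # ~16%

print(f"{truly_incremental:.0%} of 'campaign conversions' were truly incremental")
```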
Most teams spend the majority of their measurement effort on efficiency and attribution, and almost none on incrementality. That is backwards from a business value perspective.
The Attribution Model Problem
I spent years managing large paid search and paid social accounts across a range of sectors, including financial services, retail, and travel. One of the things that became clear over time was that attribution models do not describe how customers behave. They describe how we have decided to allocate credit.
A customer who sees a display ad on Monday, clicks a Facebook ad on Wednesday, and converts via branded search on Friday has generated three data points. Each platform will claim the conversion. The CRM will record the last click. The business will report a strong week across all three channels. Nobody will ask whether the display ad did anything, or whether the Facebook click moved the customer any closer to buying, or whether the branded search was just the mechanism through which an already-decided customer completed their purchase.
This is not a data problem. It is a framing problem. When the goal of measurement is to demonstrate that marketing is working, the measurement framework will find a way to show that it is. When I was judging the Effie Awards, the entries that stood out were not the ones with the best-looking attribution data. They were the ones that could show a credible connection between the campaign and a business outcome that could not easily be explained by other factors.
That standard is harder to meet than most teams realise. It requires separating what you can observe from what you can reasonably claim.
How to Build a Measurement Framework That Actually Works
There is no single measurement framework that works for every business. But there are principles that hold across most situations.
Start with the business outcome, not the channel metric. What does the business need marketing to do? Grow revenue? Defend margin? Acquire customers in a new segment? The answer shapes which metrics matter. If the business goal is new customer acquisition, measuring return on ad spend across all customers (including repeat buyers) will give you a misleading picture of how well you are doing that job.
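To see how much that framing matters, here is the same hypothetical campaign reported both ways. All figures are invented for illustration:

```python
# Why blended ROAS misleads when the goal is new-customer acquisition.
# All figures are hypothetical.
ad_spend = 100_000
revenue_repeat_customers = 350_000   # buyers who already knew the brand
revenue_new_customers = 90_000       # the job the budget was actually given

blended_roas = (revenue_repeat_customers + revenue_new_customers) / ad_spend
new_customer_roas = revenue_new_customers / ad_spend

print(f"Blended ROAS: {blended_roas:.1f}x")            # 4.4x, looks healthy
print(f"New-customer ROAS: {new_customer_roas:.1f}x")  # 0.9x, the real story
```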
Separate your metrics by decision type. Some metrics exist to help you optimise campaigns in real time. Others exist to help you make budget allocation decisions. Others exist to tell the board whether marketing is contributing to the business. These are different questions and they need different metrics. Mixing them up, which happens constantly, produces dashboards that look comprehensive but support almost no useful decisions.
Build in holdout tests wherever you can. The most direct way to measure incrementality is to withhold activity from a segment of your audience and compare outcomes. This is uncomfortable because it means deliberately not marketing to some people. But without it, you are guessing at whether your activity is driving results or just accompanying them. Tools like those covered in SEMrush’s overview of growth analysis tools can support the analytical side, but the experimental design has to come first.
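Here is a minimal sketch of reading out a simple holdout test, using only the Python standard library. The audience sizes and conversion counts are hypothetical, and a real analysis would also check assignment balance, contamination between groups, and pre-period trends:

```python
# Reading out a simple holdout test: exposed vs deliberately withheld audiences.
from statistics import NormalDist

exposed_n, exposed_conv = 200_000, 10_000   # audience that could see the ads
holdout_n, holdout_conv = 50_000, 2_300     # audience withheld from the ads

p_exp = exposed_conv / exposed_n            # 5.0%
p_hold = holdout_conv / holdout_n           # 4.6%
lift = (p_exp - p_hold) / p_hold            # relative lift, ~8.7%

# Two-proportion z-test: is the difference distinguishable from noise?
p_pool = (exposed_conv + holdout_conv) / (exposed_n + holdout_n)
se = (p_pool * (1 - p_pool) * (1 / exposed_n + 1 / holdout_n)) ** 0.5
z = (p_exp - p_hold) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Lift: {lift:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```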
Use media mix modelling for brand and upper-funnel activity. For channels where individual-level tracking is not possible or is unreliable, econometric modelling gives you a statistical estimate of contribution over time. It is not precise. But it is more honest than pretending that TV or out-of-home advertising cannot be measured at all, or that last-click data tells you anything useful about those channels.
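For a sense of the mechanics, here is a toy media mix model: adstock-transform weekly spend, then regress sales against it. The data is synthetic and a production model would add saturation curves, seasonality, base demand, and proper validation, but the shape of the approach is the same:

```python
# A toy media mix model on synthetic weekly data.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
tv = rng.uniform(0, 100, weeks)          # weekly TV spend (indexed)
search = rng.uniform(0, 100, weeks)      # weekly search spend (indexed)

def adstock(spend, decay):
    """Carry a fraction of each week's effect over into the following weeks."""
    out = np.zeros_like(spend)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

tv_fx = adstock(tv, decay=0.6)           # brand media decays slowly
search_fx = adstock(search, decay=0.1)   # search effect is near-immediate

# Synthetic "true" sales, so the fit has known coefficients to recover.
sales = 500 + 2.0 * tv_fx + 3.5 * search_fx + rng.normal(0, 50, weeks)

X = np.column_stack([np.ones(weeks), tv_fx, search_fx])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"base={coef[0]:.0f}, tv={coef[1]:.2f}, search={coef[2]:.2f}")
```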
Report honestly on what you do not know. This is the hardest part. When I was running a loss-making agency through a turnaround, one of the first things I changed was how we reported to clients. We stopped presenting every metric as evidence of success. We started being explicit about which numbers we were confident in and which were directional at best. It felt risky. Clients responded better than we expected. Honest approximation builds more trust than false precision.
The Metrics That Get Overweighted and Why
Some metrics dominate marketing reporting not because they are the most useful but because they are the easiest to generate and the easiest to improve. Impressions, clicks, engagement rate, follower growth, page views. These are activity metrics. They measure that something happened, not whether it mattered.
The problem is not that these metrics are useless. It is that they get reported as proxies for commercial performance when they are not. A campaign that generates 10 million impressions and a 4% engagement rate may have done nothing for revenue. A campaign that generates 50,000 impressions and a measurable lift in brand consideration among a high-value segment may have done a great deal. The first will look better in most marketing reports. The second is the one worth caring about.
I have managed hundreds of millions in ad spend across more than 30 industries. The pattern is consistent: teams gravitate toward the metrics they can control and improve, not the metrics that are hardest to move but most closely tied to business outcomes. That is human. But it produces a measurement culture that optimises for the appearance of performance rather than the reality of it.
Fixing this requires leadership to be explicit about which metrics actually matter for the business, and to resist the temptation to celebrate improvements in metrics that do not connect to anything commercial. The feedback loops that Hotjar describes in their work on growth loops are a useful frame here: the metrics you reward shape the behaviour you get, and the behaviour shapes the outcomes.
Brand Measurement: The Part Most Teams Get Wrong
Brand measurement is where measurement frameworks most commonly break down. Not because it is impossible, but because most teams default to one of two bad approaches: either they do not measure brand at all because it is “too soft”, or they use brand metrics that have no demonstrated connection to business outcomes.
Brand awareness, unaided recall, Net Promoter Score. These are not useless, but they are only meaningful if you have established a link between movement in those metrics and movement in revenue or margin. Without that link, you are measuring something, but you do not know whether it matters.
The more useful approach is to track brand metrics alongside commercial outcomes over time, and to look for the relationship between them. Does an increase in brand consideration in a market precede an increase in conversion rates in that market? Does a decline in brand preference show up as a change in price sensitivity? These relationships take time to establish and they are not always clean. But they are the basis for credible brand measurement, as opposed to brand measurement that exists to justify brand spend.
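A crude but useful first pass is to correlate the brand series against the commercial series at several lags. The sketch below uses synthetic monthly data with a two-month delay deliberately built in, so the scan has a known relationship to recover. Treat this as hypothesis generation, not proof of causation:

```python
# Does brand consideration lead conversion rate? A first-pass lag scan.
import numpy as np

rng = np.random.default_rng(1)
months = 36
consideration = 40 + np.cumsum(rng.normal(0, 1, months))   # brand tracker, %

# Conversion is constructed to follow consideration with a two-month delay.
conversion = np.empty(months)
for t in range(months):
    conversion[t] = 2.0 + 0.05 * consideration[max(t - 2, 0)] + rng.normal(0, 0.1)

for lag in range(5):
    earlier = consideration[: months - lag]   # brand metric, earlier
    later = conversion[lag:]                  # commercial metric, later
    print(f"lag {lag} months: r = {np.corrcoef(earlier, later)[0, 1]:.2f}")
# Expect the strongest correlation around lag 2, the built-in delay.
```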
BCG’s work on scaling commercial capability touches on the importance of building measurement disciplines that grow with the organisation rather than being retrofitted after the fact. Brand measurement is a good example of where that matters: it is much harder to establish a baseline and track change over time if you only start measuring when someone asks whether brand investment is working.
The Role of Testing in a Credible Measurement Culture
The best measurement frameworks are built around a testing culture, not a reporting culture. The difference is significant. A reporting culture asks: what happened? A testing culture asks: what happens when we change this?
Testing requires a hypothesis, a control, and a clear definition of success before the test runs. It requires the discipline to run tests long enough to be meaningful and the honesty to report results that do not confirm what you hoped. Most marketing teams struggle with the last part. When a test shows that a campaign had no incremental effect, the instinct is to question the test design rather than accept the result.
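Part of that discipline is deciding, before launch, how big the test needs to be. Here is a minimal sample-size sketch for a two-proportion test; the baseline rate and minimum detectable effect are hypothetical inputs you would set from your own data:

```python
# How long is "long enough"? Size the test before it starts.
from statistics import NormalDist

baseline = 0.040          # current conversion rate
mde = 0.10                # minimum detectable effect: 10% relative lift
alpha, power = 0.05, 0.80 # conventional significance and power

p1 = baseline
p2 = baseline * (1 + mde)
z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)

n = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
print(f"~{n:,.0f} users per group")   # roughly 39,000 with these inputs
```

If the answer is more traffic than you have, the honest move is to test a bigger change or run longer, not to call a noisy result a win.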
I have been in rooms where test results that showed no incremental lift were quietly shelved because they were inconvenient. That is how measurement frameworks get corrupted. The value of a testing culture is not just the individual test results. It is the discipline of treating measurement as a genuine question rather than a post-rationalisation exercise.
There are useful frameworks for structuring growth experiments, including examples covered in SEMrush’s analysis of growth approaches. The technical side of running experiments is increasingly well-supported. The harder part is building an organisational culture where honest results are welcomed rather than managed.
What Good Marketing Measurement Actually Looks Like
Good measurement is not comprehensive measurement. It is the right measurement for the decisions you need to make.
When I grew an agency from 20 to 100 people, one of the things that made that possible was getting clear on which numbers actually drove decisions, and ignoring the rest. Not because the other numbers were irrelevant, but because trying to track everything creates noise that obscures the signal. The same principle applies to marketing measurement at any scale.
A good measurement framework has three layers. At the top, two or three business-level metrics that connect marketing to commercial outcomes: revenue from new customers, contribution margin, market share in a defined segment. In the middle, a small set of leading indicators that have a demonstrated relationship with those business outcomes: brand consideration, organic search share, email list quality. At the bottom, operational metrics for day-to-day campaign management that stay in the hands of the people running the campaigns and do not dominate board-level reporting.
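One way to keep that structure honest is to write it down explicitly, with each layer's audience and the decision it supports. The metric examples below come straight from the framework above; the structure itself is just one way to encode it:

```python
# The three-layer framework written down as data. Each layer names its
# metrics, its audience, and the decision it exists to support.
measurement_framework = {
    "business": {
        "audience": "board / exec",
        "decision": "is marketing contributing commercially?",
        "metrics": ["revenue from new customers", "contribution margin",
                    "market share in a defined segment"],
    },
    "leading_indicators": {
        "audience": "marketing leadership",
        "decision": "budget allocation across channels",
        "metrics": ["brand consideration", "organic search share",
                    "email list quality"],
    },
    "operational": {
        "audience": "campaign teams",
        "decision": "day-to-day optimisation",
        "metrics": ["cost per click", "click-through rate",
                    "return on ad spend by campaign"],
    },
}

for layer, spec in measurement_framework.items():
    print(f"{layer}: {spec['decision']} -> {', '.join(spec['metrics'])}")
```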
Most marketing dashboards invert this structure. Operational metrics at the top, business outcomes buried at the bottom or absent entirely. That inversion is not accidental. It reflects a measurement culture that prioritises what can be controlled over what actually matters.
If measurement is a gap in your current go-to-market thinking, the broader Go-To-Market and Growth Strategy section covers the commercial frameworks that make measurement decisions easier to anchor.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
