Measuring Marketing Effectiveness: Stop Counting What’s Easy
Measuring marketing effectiveness means understanding how much your marketing activity is genuinely contributing to business growth, as opposed to taking credit for outcomes that would have happened anyway. Most companies do not measure this well. They measure what is easy to measure, report what looks good, and quietly avoid the harder questions that would expose how much of their marketing spend is doing very little.
That is not cynicism. It is a pattern I have seen repeat itself across agencies, in-house teams, and boardrooms for over two decades. The measurement problem in marketing is not a data problem. It is an honesty problem.
Key Takeaways
- Most marketing measurement systems are built to confirm spend, not evaluate it. That bias produces flattering numbers and weak decisions.
- Last-click attribution systematically overstates the value of lower-funnel channels and understates the contribution of brand and awareness activity.
- Incrementality, not correlation, is the right standard for measuring marketing effectiveness. The question is always: would this have happened without us?
- Marketing mix modelling and geo-based holdout tests are the most reliable tools available, but they require investment and organisational honesty to use well.
- A measurement framework that makes your marketing look good is not the same as one that makes your marketing better.
In This Article
- Why Most Marketing Measurement Is Built to Confirm, Not Evaluate
- What Incrementality Actually Means and Why It Matters
- The Problem With Last-Click Attribution
- Marketing Mix Modelling: What It Is and When to Use It
- The Metrics That Actually Connect to Business Outcomes
- Why Growth Requires Reaching New Audiences, Not Just Capturing Existing Intent
- Building a Measurement Framework That Holds Up
- The Organisational Problem Nobody Talks About
Why Most Marketing Measurement Is Built to Confirm, Not Evaluate
Early in my career I was a true believer in performance marketing. The numbers were right there: clicks, conversions, cost per acquisition. You could see the machine working. I spent years optimising lower-funnel channels and reporting impressive returns to clients and boards who were equally pleased with the clean, direct lines between spend and outcome.
It took me longer than I would like to admit to question what those numbers were actually telling me. The conversion rates were real. The attribution was not. What I was mostly measuring was our ability to intercept people who had already decided to buy, and then claim credit for the sale.
Think about a clothes shop. Someone who picks something up and tries it on is far more likely to buy than someone browsing the rack. If you only measured transactions at the till, you might conclude that the fitting rooms were your most effective sales tool. You would be measuring something real while missing the point entirely. Most digital attribution works the same way. It finds people close to purchase and calls it a conversion.
The broader issue is structural. Marketing teams are incentivised to show return on investment, and the measurement frameworks they choose reflect that incentive. Last-click attribution, platform-reported ROAS, and vanity metrics like impressions and engagement rates all share one characteristic: they are easy to generate and easy to present. They are not easy to challenge, which is precisely why they persist.
If you are building or rebuilding a go-to-market strategy, measurement has to be part of the architecture from the start. The Go-To-Market and Growth Strategy hub covers how measurement fits alongside channel selection, audience strategy, and commercial planning, because none of those decisions hold up without a clear picture of what is actually working.
What Incrementality Actually Means and Why It Matters
Incrementality is the right standard for measuring marketing effectiveness. The question it asks is simple: would this sale, this customer, this behaviour have happened without our marketing activity? If the answer is yes, the activity is not adding value. It is adding cost.
This sounds obvious when you say it plainly. In practice, almost no one applies it consistently. When I was running an agency and managing hundreds of millions in paid media spend across thirty-plus industries, I watched clients accept platform-reported returns that bore no relationship to business performance. Revenue was flat or declining while the dashboards showed green. The gap between what the measurement said and what the business experienced was the incrementality problem, made visible.
The most reliable way to measure incrementality is through controlled experiments. Geo-based holdout tests, where you turn off spend in matched geographic markets and compare outcomes to markets where spend continues, give you a clean read on what your activity is actually contributing. They are not cheap to run and they require patience, but the insight they produce is worth more than months of attribution data.
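To make the mechanics concrete, here is a minimal sketch of how the read from a geo holdout comes together: sum outcomes in the markets where spend continued, sum outcomes in the matched markets where it was paused, and treat the gap as the incremental effect. The market pairings and figures below are invented for illustration; a real test would add matching diagnostics and significance checks.

```python
# Minimal sketch of reading a geo-based holdout test. Spend was paused
# in each "dark" market while its matched control market kept spending.
# Market pairings and figures are invented for illustration.

# (sales where spend continued, sales where spend was paused),
# summed over the test window for each matched market pair.
matched_pairs = [
    (118_000, 104_000),
    (142_000, 131_500),
    (97_500, 96_800),
]

control_total = sum(on for on, _ in matched_pairs)   # spend on
dark_total = sum(off for _, off in matched_pairs)    # spend off

# The gap is the sales your activity actually generated.
incremental_sales = control_total - dark_total
lift = incremental_sales / dark_total

print(f"Incremental sales: {incremental_sales:,}")
print(f"Lift over the no-spend baseline: {lift:.1%}")
```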
Randomised controlled trials at the user level are even cleaner, though harder to execute at scale outside of large platforms. The principle is the same: create a group that does not see your marketing, compare their behaviour to a group that does, and measure the difference. That difference is your incremental effect. Everything else is correlation dressed up as causation.
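The user-level version is the same arithmetic at a finer grain. A hedged sketch, assuming you have exposed and holdout counts from a properly randomised split; the numbers are synthetic, and the z-score at the end is the standard two-proportion check, not a substitute for proper test design.

```python
from math import sqrt

# Synthetic counts from a randomised split: the holdout never saw
# the campaign, the exposed group did. All numbers are illustrative.
exposed_users, exposed_conversions = 200_000, 4_400
holdout_users, holdout_conversions = 200_000, 4_000

exposed_rate = exposed_conversions / exposed_users   # 2.20%
holdout_rate = holdout_conversions / holdout_users   # 2.00%

# The incremental effect is the difference, not the exposed rate itself.
incremental_rate = exposed_rate - holdout_rate

# Standard two-proportion z-score as a rough significance check.
pooled = (exposed_conversions + holdout_conversions) / (exposed_users + holdout_users)
se = sqrt(pooled * (1 - pooled) * (1 / exposed_users + 1 / holdout_users))

print(f"Exposed {exposed_rate:.2%} vs holdout {holdout_rate:.2%}")
print(f"Incremental conversion rate: {incremental_rate:.2%}")
print(f"z = {incremental_rate / se:.1f}")
```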
The Problem With Last-Click Attribution
Last-click attribution assigns full credit for a conversion to the final touchpoint before purchase. It is the dominant measurement model in digital marketing and it produces systematically misleading results.
What it does is concentrate credit on the channels that appear at the bottom of the funnel, typically paid search and retargeting, because those are the channels people interact with closest to the moment of purchase. It strips credit from the channels that built awareness, created preference, and generated the intent that made the final click possible.
I have sat across the table from clients who were on the verge of cutting their brand investment because the attribution model showed it delivering almost no return. In every case, the brand activity was doing exactly what it was supposed to do: building the category salience and preference that made their performance channels work. The measurement model could not see it, so the investment looked wasteful.
Multi-touch attribution models attempt to address this by distributing credit across touchpoints, but they introduce their own distortions. Time-decay models favour recency. Linear models treat all touchpoints as equal. Data-driven models depend on the quality and completeness of the data feeding them, which is almost never as good as advertised, particularly in a post-cookie environment where cross-device and cross-channel tracking has become increasingly unreliable.
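A short sketch makes the distortion visible. The five-touch journey and the £100 conversion below are invented, and the three functions are simplified versions of each model rather than any platform's exact implementation, but they show how the same single conversion earns very different credit depending on which lens you pick:

```python
# One invented five-touch journey and a £100 conversion, credited
# three ways. Each function is a simplified version of the model it
# is named after, not any platform's exact implementation.

journey = ["display", "social_video", "organic_search", "email", "paid_search"]
conversion_value = 100.0

def last_click(path):
    # Full credit to the final touchpoint.
    return {path[-1]: conversion_value}

def linear(path):
    # Equal credit to every touchpoint.
    share = conversion_value / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(path, half_life=2):
    # Credit halves every `half_life` steps back from the conversion.
    weights = [0.5 ** ((len(path) - 1 - i) / half_life) for i in range(len(path))]
    scale = conversion_value / sum(weights)
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + w * scale
    return credit

for name, model in [("last-click", last_click), ("linear", linear), ("time-decay", time_decay)]:
    print(name, {ch: round(v, 2) for ch, v in model(journey).items()})
```

Run it and paid search collects the full £100 under last-click, £20 under linear, and roughly £36 under time decay. None of those figures is the truth; they are three accounting conventions applied to the same event.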
The honest position is that no attribution model gives you a complete picture. Each one is a perspective on reality, not reality itself. The goal is to use multiple lenses together and be clear-eyed about what each one can and cannot tell you.
Marketing Mix Modelling: What It Is and When to Use It
Marketing mix modelling, often called MMM, uses historical data to estimate the contribution of different marketing channels and activities to business outcomes. It is a statistical approach that controls for external factors like seasonality, economic conditions, and competitor activity, and attempts to isolate the effect of each marketing variable.
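To show the core idea without the machinery, here is a toy sketch: geometric adstock on one channel, ordinary least squares against synthetic sales, seasonality as a control. Every figure is made up so the regression has known effects to recover; a production MMM would add saturation curves, many more controls, and usually a Bayesian framework, but the shape of the exercise is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data, the usual minimum for MMM

tv = rng.uniform(20, 100, weeks)      # weekly TV spend, £k (synthetic)
search = rng.uniform(10, 60, weeks)   # weekly search spend, £k (synthetic)
season = np.sin(np.arange(weeks) * 2 * np.pi / 52)  # seasonality control

def adstock(spend, decay=0.6):
    # Geometric adstock: a share of each week's effect carries forward.
    out = np.zeros_like(spend)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

# Synthetic "true" sales, so the regression has known effects to recover.
sales = (200 + 1.8 * adstock(tv) + 3.0 * search + 40 * season
         + rng.normal(0, 15, weeks))

# Ordinary least squares: sales ~ base + adstocked TV + search + season.
X = np.column_stack([np.ones(weeks), adstock(tv), search, season])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

print(f"base {coef[0]:.1f}, tv {coef[1]:.2f}, "
      f"search {coef[2]:.2f}, season {coef[3]:.1f}")
```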
MMM has been around for decades. It fell out of fashion during the rise of digital attribution because digital promised something MMM could not: real-time, granular, apparently precise measurement. That promise has not aged well. As digital tracking has become less reliable and the limitations of attribution have become more apparent, MMM has seen a significant revival, particularly among larger advertisers who have the data volume and budget to run it properly.
The advantages of MMM are real. It is not dependent on cookies or user-level tracking. It captures the effect of offline channels alongside digital ones. It can model the long-term effects of brand investment, not just the immediate response. And it produces results at the business outcome level, not the platform metric level, which makes it far more useful for strategic decisions.
The limitations are also real. MMM requires substantial historical data, typically two or more years of weekly spend and sales data across all channels. It is expensive to build and maintain. And it is only as good as the assumptions built into the model, which means the output can be manipulated, consciously or otherwise, to tell a particular story. I have seen MMM outputs that were clearly designed to justify a budget allocation that had already been decided. The model was real. The objectivity was not.
Used honestly, MMM is one of the most valuable tools available for understanding marketing effectiveness at a portfolio level. Used as a post-hoc rationalisation exercise, it is expensive noise.
The Metrics That Actually Connect to Business Outcomes
One of the clearest signs that a marketing team is measuring the wrong things is when their metrics and the business’s financial performance move in opposite directions. Engagement is up, revenue is flat. Impressions are growing, market share is not. This disconnect is more common than anyone in marketing likes to acknowledge.
When I judged the Effie Awards, which recognise marketing effectiveness rather than creativity for its own sake, the entries that stood out were the ones where the marketing team had been ruthlessly clear about what business problem they were solving and how they would know if they had solved it. The metrics were commercial: revenue, volume, penetration, share. Not likes, not reach, not brand recall scores disconnected from any commercial context.
The metrics that connect to business outcomes tend to share a few characteristics. They are lagging indicators tied to actual transactions or customer behaviour, not leading indicators that predict future behaviour in theory but rarely do in practice. They are measured at the business level, not the channel level. And they are benchmarked against something meaningful, a prior period, a control group, a competitor, rather than reported in isolation.
Revenue is the obvious anchor. But revenue alone does not tell you enough. New customer acquisition, customer retention rates, average order value, and category penetration all provide a more complete picture of whether marketing is genuinely growing the business or simply servicing existing demand. Market penetration metrics are particularly underused in marketing measurement, despite being one of the clearest indicators of whether you are reaching genuinely new audiences or cycling through the same base.
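These metrics are cheap to compute once you work from transaction records rather than channel reports. A minimal sketch, with invented records and an assumed category-size figure for the penetration calculation:

```python
# Invented transaction records: (customer_id, period, order_value).
transactions = [
    ("c1", "prior", 80.0), ("c2", "prior", 120.0), ("c3", "prior", 60.0),
    ("c1", "current", 90.0), ("c4", "current", 150.0), ("c5", "current", 70.0),
]

prior = {c for c, p, _ in transactions if p == "prior"}
current = {c for c, p, _ in transactions if p == "current"}
current_values = [v for _, p, v in transactions if p == "current"]

new_customers = current - prior                      # acquisition
retention_rate = len(prior & current) / len(prior)   # retention
average_order_value = sum(current_values) / len(current_values)

category_buyers = 1_000  # assumed category size for penetration
penetration = len(current) / category_buyers

print(f"New customers: {len(new_customers)}")
print(f"Retention: {retention_rate:.0%}, AOV: £{average_order_value:.2f}")
print(f"Category penetration: {penetration:.1%}")
```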
Brand health metrics have a role too, but they need to be connected to commercial outcomes to be useful. Brand awareness in isolation tells you very little. Brand awareness among your target audience, tracked alongside purchase intent and conversion rates over time, tells you considerably more. The connection between brand and business has to be made explicit in your measurement framework, or brand investment will always lose the budget argument to performance channels that appear to deliver immediate returns.
Why Growth Requires Reaching New Audiences, Not Just Capturing Existing Intent
There is a version of marketing measurement that is technically sound and commercially useless. It accurately measures the return on capturing demand that already existed, optimises for efficiency within that demand pool, and reports excellent results right up until the growth stops.
This is the trap that performance-heavy measurement frameworks set. They are very good at measuring the bottom of the funnel and very bad at measuring what fills the top of it. If your measurement framework cannot see the effect of awareness and consideration activity, you will systematically underinvest in it. And if you underinvest in it long enough, the demand pool your performance channels are fishing from will shrink.
The Forrester intelligent growth model touches on this tension between short-term performance optimisation and the longer-cycle investments that sustain growth. The same tension appears in BCG’s work on brand and go-to-market strategy, which makes the case that brand investment and commercial performance are not competing priorities but complementary ones.
Measuring the effect of awareness investment is harder than measuring last-click conversions. That difficulty is real. But it is not a reason to ignore it. It is a reason to invest in better measurement tools and to be honest about the limitations of the ones you already have. The fact that something is hard to measure does not mean it is not working. It means your measurement framework has a gap.
When I grew an agency from twenty people to over a hundred and took it from loss-making to one of the top five in its sector, the turning point was not a better attribution model. It was a clearer understanding of which clients we were genuinely helping grow and which ones we were servicing without moving the needle. That distinction changed how we priced, how we staffed, and what we chose to measure. The measurement informed the strategy, not the other way around.
Building a Measurement Framework That Holds Up
A measurement framework that holds up is one that can survive scrutiny from someone who is not trying to make marketing look good. That sounds like a low bar. In practice, most frameworks would not clear it.
Start with the business question, not the available data. What does the business need marketing to achieve? New customer acquisition, retention, category growth, market share gain? Define that clearly before you decide what to measure. If you start with the data you have, you will build a framework that measures what is available rather than what matters.
Build in multiple measurement layers. No single method gives you the full picture. Use MMM for portfolio-level strategic decisions. Use geo-based holdout tests to validate the incrementality of major channel investments. Use attribution, with its limitations understood, for tactical optimisation within channels. Use brand tracking to monitor the health of your longer-term investment. Each layer answers a different question. None of them answers all of them.
Be explicit about what you cannot measure. Every framework has gaps. Acknowledging them is not a weakness. It is the difference between honest approximation and false precision. Marketing does not need perfect measurement. It needs honest measurement, and those are not the same thing.
Separate measurement from reporting. The measurement framework exists to improve decisions. The reporting framework exists to communicate performance. These have different audiences and different purposes, and conflating them is how measurement gets corrupted. When the primary audience for your measurement is the board or the client rather than the team making decisions, the framework will drift toward what looks good rather than what is true.
There is also a useful provocation in Vidyard’s piece on why go-to-market feels harder than it used to. Part of the answer is that the measurement environment has become more fragmented and less reliable, which makes it tempting to retreat to the metrics that are still easy to track, even when they are the least meaningful ones. Resist that temptation.
The Organisational Problem Nobody Talks About
Measurement frameworks fail for technical reasons, but they fail for organisational reasons more often. The technical problems are solvable. The organisational ones require something harder: a willingness to report accurately when the numbers are uncomfortable.
I have worked with companies where the marketing function was propping up a product or service that customers did not particularly want. The marketing was competent. The business problem was not a marketing problem. In those situations, better measurement does not help you. It just makes the underlying issue clearer, which is why it gets resisted.
If a company genuinely delighted its customers at every interaction, it would need less marketing, not more. The best marketing teams I have worked with understood that their job was to grow the business, not to defend the marketing budget. That distinction changes what you measure and how honestly you report it.
The measurement conversation also needs to happen at the right level. Channel managers should not be the ones deciding what constitutes marketing effectiveness for the business. That decision belongs at the commercial leadership level, where the connection between marketing investment and business outcomes is visible and where the incentives are aligned with growth rather than with any particular channel’s performance.
If you are working through how measurement should sit within a broader commercial and growth strategy, the Go-To-Market and Growth Strategy hub covers the strategic context in more depth, including how measurement decisions connect to channel strategy, audience planning, and commercial prioritisation.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
