Marketing ROI Measurement: Stop Reporting Numbers, Start Measuring Impact
Measuring marketing ROI means connecting marketing activity to business outcomes in a way that is honest, consistent, and commercially useful. It does not mean producing a dashboard full of metrics that look impressive in a slide deck but tell you nothing about whether the business is better off for having spent the money.
Most teams are doing the first thing while calling it the second. The gap between those two positions is where most marketing budgets quietly disappear.
Key Takeaways
- Marketing ROI measurement fails most often not because of bad tools, but because teams measure activity rather than business outcomes.
- The formula for ROI is simple. The hard part is defining what counts as a return and being honest when the number is low or negative.
- Most attribution models are assumptions dressed up as data. Treating them as gospel produces confident decisions based on fiction.
- An honest approximation of marketing’s contribution is more useful than a precise-looking number built on flawed inputs.
- Fixing measurement does not just improve reporting. It exposes which activities are working and which have been consuming budget on faith.
In This Article
- Why Most Marketing ROI Reporting Is Measuring the Wrong Thing
- The ROI Formula Is Simple. The Inputs Are Not.
- What Should Count as the Return?
- Attribution: The Part Where Most Measurement Falls Apart
- Incrementality: The Measurement Most Teams Skip
- Building a Measurement Framework That Holds Up to Finance
- The Honest Approximation Problem
- What Good ROI Measurement Changes
Why Most Marketing ROI Reporting Is Measuring the Wrong Thing
I spent years sitting in agency reviews where the numbers always looked good. Click-through rates were up. Impressions were growing. Cost per click was down. The client nodded along. Then, at the end of the quarter, someone from finance would ask whether revenue had moved, and the room would go quiet.
That pattern is not unusual. It is the default state of marketing measurement in most organisations. Teams optimise for the metrics they can control and report, rather than the outcomes the business actually cares about. The result is a measurement system that works perfectly at measuring the wrong things.
Activity metrics (reach, engagement, traffic, impressions) are not ROI. They are inputs to ROI at best, and vanity metrics at worst. ROI requires a numerator (the return) and a denominator (the investment). If you cannot clearly define both, you are not measuring ROI. You are reporting activity and calling it performance.
The Forrester piece on measurement undermining the buyer experience makes a point that stuck with me: the way teams measure marketing can actively distort the decisions they make about it. When you optimise for what is measurable rather than what matters, you end up with a marketing programme that scores well on its own terms but underperforms on the business’s terms.
The ROI Formula Is Simple. The Inputs Are Not.
The formula itself is not complicated. Marketing ROI equals the net return from marketing divided by the cost of marketing, expressed as a percentage. If you spent £100,000 on a campaign and it generated £400,000 in revenue attributable to that campaign, your net return is £300,000 and your ROI is 300%.
The problem is that every word in that sentence contains a decision. What counts as a cost? What counts as attributable revenue? Over what time period? Against what baseline? These are not technical questions. They are commercial and philosophical ones, and most teams either avoid them or answer them in ways that make the numbers look better than they are.
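The arithmetic itself is trivial, which is worth seeing plainly so the attention goes where it belongs: the inputs. A minimal sketch, using the illustrative figures from above:

```python
def marketing_roi(attributed_revenue: float, total_cost: float) -> float:
    """ROI as a percentage: net return divided by the investment."""
    net_return = attributed_revenue - total_cost
    return net_return / total_cost * 100

# The worked example from above: £100,000 spent, £400,000 attributed.
roi = marketing_roi(attributed_revenue=400_000, total_cost=100_000)
print(f"ROI: {roi:.0f}%")  # ROI: 300%
```

Every hard question lives in the two arguments: what goes into `attributed_revenue` and what goes into `total_cost`. The function is the easy part.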
When I was running the performance division at iProspect, we grew from around 20 people to over 100 across a few years. One of the things that changed as we scaled was how seriously we took the definition of “return.” Early on, like most agencies, we reported what the client’s analytics platform said. Conversions attributed, revenue tracked, cost per acquisition calculated. It looked rigorous. But when we started stress-testing those numbers against actual business data, the gaps were significant. The analytics platform was telling one story. The P&L was telling another.
Getting those two stories closer together is what real ROI measurement looks like. It is uncomfortable work, because it often reveals that the marketing was less effective than the dashboard suggested.
What Should Count as the Return?
This depends entirely on what the marketing was supposed to do. That sounds obvious, but it is routinely ignored. Teams launch campaigns without a clear financial objective, then scramble to find a metric that makes the spend look justified after the fact.
For direct response activity, the return is usually revenue or leads with a known conversion value. For brand campaigns, it is more complex and requires a different measurement approach, typically involving brand tracking, sales uplift studies, or econometric modelling. For retention-focused activity, it might be measured in reduced churn or increased lifetime value.
The mistake is applying a single measurement framework across all types of activity. A brand awareness campaign measured purely on last-click revenue will always look terrible. A performance campaign measured on brand recall will always look irrelevant. The measurement model has to match the marketing objective, and the marketing objective has to be defined in terms the business's finance team would recognise.
If you are building or refining your measurement approach, the broader context around marketing analytics and how to use data to make better commercial decisions is worth working through systematically rather than jumping straight to tool selection.
Attribution: The Part Where Most Measurement Falls Apart
Attribution is the process of deciding which marketing touchpoints get credit for a conversion. It sounds like a technical problem. It is actually a philosophical one, and the industry has been papering over that fact for years.
Last-click attribution gives 100% of the credit to the final touchpoint before a conversion. It is simple, widely used, and systematically wrong. It overvalues bottom-of-funnel activity (branded search, retargeting) and undervalues everything that created the demand in the first place. If you optimise based on last-click data, you will gradually defund the channels that are doing the most work and over-invest in the ones that are simply collecting the credit.
First-click has the opposite problem. Linear attribution spreads credit equally across all touchpoints, which sounds fair but is also a fiction. Time-decay models assume recency equals importance, which is sometimes true and often not. Data-driven attribution models, the ones that claim to use machine learning to assign credit based on actual conversion paths, are better in theory but require large data volumes to be reliable and are still making assumptions about causation that the data cannot actually prove.
I judged the Effie Awards, which are specifically focused on marketing effectiveness. The entries that consistently stood out were not the ones with the most sophisticated attribution models. They were the ones where the team had been honest about what they could and could not measure, had used multiple measurement approaches in parallel, and had triangulated toward a conclusion rather than treating a single model as definitive truth.
That triangulation approach is worth borrowing. Use your attribution model as one input, not as the answer. Pair it with incrementality testing where you can, with brand tracking where it is relevant, and with direct business outcome data wherever possible.
Incrementality: The Measurement Most Teams Skip
Incrementality testing asks a specific question: what would have happened without this marketing activity? It is the closest thing to a controlled experiment that most marketing teams can run, and it is significantly underused.
The basic approach involves splitting your audience into an exposed group and a holdout group, running your campaign to the exposed group, and comparing the outcomes. The difference in conversion rate or revenue between the two groups is your incremental lift. That lift, multiplied by the size of your addressable audience, gives you a much more honest picture of what the marketing actually caused, rather than what it was present for.
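The calculation behind that comparison is straightforward. A sketch with hypothetical numbers (all figures here are illustrative, not benchmarks):

```python
def incremental_lift(exposed_conversions: int, exposed_size: int,
                     holdout_conversions: int, holdout_size: int) -> float:
    """Difference in conversion rate between the exposed and holdout groups."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return exposed_rate - holdout_rate

# Hypothetical test: 5% of the exposed group converted, 4% of the holdout.
lift = incremental_lift(exposed_conversions=5_000, exposed_size=100_000,
                        holdout_conversions=400, holdout_size=10_000)

# Scaled to an assumed addressable audience of 1,000,000 people.
incremental_conversions = lift * 1_000_000
print(f"Lift: {lift:.1%}, roughly {incremental_conversions:,.0f} incremental conversions")
```

Note what this says: of the 5% who converted after exposure, only the 1 percentage point above the holdout baseline was actually caused by the campaign. The other 4 points would likely have converted anyway.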
This matters because a significant portion of conversions attributed to marketing would have happened anyway. Someone who was already planning to buy your product searches for your brand name, clicks a paid search ad, and converts. The platform reports a conversion. The campaign gets the credit. But the sale was not caused by the ad. It was captured by it. Those are very different things, and conflating them leads to overinvestment in channels that are harvesting existing demand rather than generating new demand.
Incrementality testing is not always easy to run. It requires volume, patience, and a willingness to accept results that might be inconvenient. But it is the only way to get close to an honest answer about what marketing is actually worth.
Building a Measurement Framework That Holds Up to Finance
One test I have always found useful: could you present your marketing ROI calculation to your CFO and have them accept the methodology? Not the number, the methodology. If the answer is no, the measurement is probably not rigorous enough to base decisions on.
A defensible measurement framework has a few consistent features:
- It starts with a clear objective expressed in financial terms: revenue, profit contribution, customer acquisition cost, lifetime value, or churn reduction.
- It defines the measurement period upfront, because marketing effects often lag spend by weeks or months, and short measurement windows systematically undervalue brand and upper-funnel activity.
- It accounts for the full cost of marketing, including agency fees, technology, internal resource, and production, not just media spend.
- It uses more than one measurement approach, treating the outputs as inputs to a judgment rather than as definitive answers.
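The full-cost point is the one finance will probe first, because media-only ROI always flatters the marketing. A sketch with hypothetical cost lines showing how much the denominator matters:

```python
# Hypothetical cost lines for one campaign (illustrative figures only).
costs = {
    "media": 250_000,
    "agency_fees": 40_000,
    "technology": 15_000,
    "internal_resource": 30_000,
    "production": 25_000,
}
attributed_revenue = 900_000

media_only_roi = (attributed_revenue - costs["media"]) / costs["media"] * 100
full_cost = sum(costs.values())
full_cost_roi = (attributed_revenue - full_cost) / full_cost * 100

print(f"Media-only ROI: {media_only_roi:.0f}%")  # the flattering number
print(f"Full-cost ROI:  {full_cost_roi:.0f}%")   # the defensible number
```

On these figures, the same campaign reports 260% on media spend alone but 150% once the full cost base is counted. The second number is the one a CFO will accept.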
The MarketingProfs framework for building a marketing dashboard is a useful starting point for structuring what you report. The underlying principle, that a dashboard should answer specific business questions rather than display every available metric, applies directly to ROI measurement.
On the tools side, GA4 has changed how many teams approach measurement, particularly for digital activity. Moz’s analysis of how GA4 data can reshape content strategy illustrates how the platform’s event-based model opens up measurement possibilities that were harder to access in Universal Analytics, though it also requires more deliberate setup to get right. If your GA4 implementation was not planned with clear business objectives in mind, the data it produces will reflect that.
Email is one channel where ROI measurement is more tractable than most, because the data is relatively clean and the attribution chain is shorter. HubSpot’s guide to email marketing reporting covers the metrics worth tracking and, more usefully, how to connect them to revenue rather than treating open rates as an end in themselves.
The Honest Approximation Problem
There is a version of this conversation that ends with a neat framework, a tidy formula, and a promise that if you follow the steps you will know exactly what your marketing is worth. That version is not honest.
Marketing ROI, in most real-world situations, cannot be measured with precision. There are too many variables, too many unmeasured touchpoints, too many external factors affecting business performance. The goal is not precision. It is an honest approximation, presented as an approximation rather than as a definitive answer.
I have seen teams do significant damage by treating imprecise ROI numbers as precise ones. They defund channels that look unprofitable on a flawed attribution model. They scale channels that look highly profitable because they are good at claiming credit. They optimise toward metrics that are easy to measure rather than outcomes that are hard to measure but actually matter. The measurement system ends up shaping the marketing strategy, and not in a good way.
The alternative is not to abandon measurement. It is to hold it more honestly. To say “based on our best available data, we believe this activity contributed approximately X to revenue, with the following caveats” rather than presenting a number with false confidence. That kind of honesty is uncomfortable in a boardroom, but it is the foundation of measurement that actually improves over time.
The MarketingProfs piece on web analytics preparation makes a point that has aged well: the failure mode in analytics is usually not technical. It is the absence of a clear plan for what you are trying to measure and why. That is as true for ROI measurement as it is for any other analytics discipline.
What Good ROI Measurement Changes
When I was brought in to turn around a loss-making agency, one of the first things I did was look at how the business measured the impact of its own marketing. What I found was a familiar pattern: lots of activity, reasonable-looking metrics, no clear line to revenue. The agency was spending money on marketing that had never been properly evaluated and had been renewed on inertia rather than evidence.
Fixing the measurement did not just improve the reporting. It changed the decisions. Channels that looked productive on surface metrics turned out to be producing very little when you traced the path to actual revenue. Channels that had been underinvested in because they were harder to measure turned out to be doing more work than the data suggested. The budget shifted. The results improved. Not because we had found a magic channel, but because we had stopped funding things on faith and started making decisions on evidence.
That is what good ROI measurement is actually for. Not to produce a number for a slide deck. To change decisions. If your measurement system is not changing how you allocate budget, it is probably not measuring the right things.
For content-heavy programmes, Unbounce’s breakdown of content marketing metrics is worth reading with a critical eye. The useful metrics are the ones that connect to pipeline or revenue. The rest are inputs to a story, not the story itself.
And if you are working on simplifying how your team thinks about analytics more broadly, this Unbounce piece on making marketing analytics simple is a useful counterweight to the tendency to add complexity when the measurement is not producing the answers you want.
If you are building out a more complete approach to measurement across channels, the Marketing Analytics hub covers the broader landscape, from attribution and GA4 to competitive data and performance frameworks, in a way that is designed to be practically useful rather than theoretically complete.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
