Marketing ROI: Stop Measuring Activity and Start Measuring Outcomes
Demonstrating ROI for marketing spend is not a measurement problem. It is a framing problem. Most marketing teams are measuring the wrong things with great precision, then wondering why the CFO still treats the budget as discretionary.
The fix is not a better dashboard. It is a cleaner connection between what marketing does and what the business actually cares about, built before the campaign launches, not retrofitted after it ends.
Key Takeaways
- Most marketing teams measure outputs (impressions, clicks, sessions) when the business only cares about outcomes (revenue, pipeline, retention). Closing that gap is the real ROI challenge.
- Attribution models are approximations, not facts. Treating them as facts destroys credibility with finance faster than any budget overrun.
- The most common source of poor marketing ROI is not bad execution. It is misaligned briefs that fund activity disconnected from commercial goals.
- A simple contribution framework, agreed with finance before the year starts, is worth more than any post-campaign analytics report.
- Marketing does not need perfect measurement. It needs honest approximation, communicated with commercial confidence.
In This Article
- Why the ROI Conversation Keeps Going Wrong
- The Measurement Trap: Precision Without Meaning
- What Finance Actually Wants From Marketing
- The Real Source of Poor Marketing ROI
- Building a Contribution Framework That Finance Will Accept
- Attribution: What to Use, What to Ignore, and What to Admit You Don’t Know
- The Experiments That Actually Prove Value
- How to Present Marketing ROI Without Losing the Room
- The Long Game: Building Measurement Credibility Over Time
Why the ROI Conversation Keeps Going Wrong
I have sat in more budget review meetings than I can count, on both sides of the table: as an agency CEO presenting to clients, and as a marketing operator defending spend to boards. The dynamic is almost always the same. Marketing arrives with a deck full of metrics. Finance arrives with a question about revenue. Neither side is speaking the other’s language, and the meeting ends with a budget cut dressed up as a “strategic realignment.”
The root cause is not a lack of data. Most marketing teams are swimming in data. The problem is that the data being reported does not connect to the numbers that finance uses to evaluate business performance. Impressions are not revenue. Share of voice is not margin. A cost-per-click that improved 18% quarter-on-quarter is irrelevant if pipeline is flat.
When I was running iProspect and we were growing the team from around 20 people toward 100, one of the things that changed our client relationships most significantly was shifting how we reported. We stopped leading with channel metrics and started leading with commercial contribution. It changed the conversations immediately. Clients stopped questioning budget and started asking how to increase it.
The Measurement Trap: Precision Without Meaning
There is a particular kind of marketing report that looks authoritative and means almost nothing. It has a lot of numbers, several charts with upward trends, and a summary that says performance was “strong across key metrics.” Finance reads it and immediately asks: so what did it do for the business?
The industry has developed an impressive infrastructure for measuring activity. We can track clicks, attribute conversions across channels, model customer journeys, and produce multi-touch attribution reports that would have seemed like science fiction twenty years ago. And yet the fundamental question, whether a given marketing investment generated a return worth its cost, remains stubbornly difficult to answer with confidence.
Part of the problem is that the tools we use to measure marketing are measuring a model of reality, not reality itself. Attribution platforms assign credit based on rules or algorithms. Those rules are imperfect. The last click did not necessarily cause the conversion. The first touch did not necessarily create awareness. The customer’s actual decision-making process is more complicated than any attribution model can capture, and pretending otherwise is where marketing loses credibility with finance.
I have seen this play out in detail when judging the Effie Awards. The entries that struggled to demonstrate effectiveness were rarely the ones with bad campaigns. They were the ones that confused measurement with proof. They had data, but the data did not tell a coherent commercial story. The entries that won had made a deliberate choice: honest approximation over false precision.
There is useful thinking from Forrester on how marketing silos distort measurement. When teams optimise for their own channel metrics in isolation, the aggregate picture looks good even when the business outcome is poor. Breaking down those silos is as much a measurement discipline as it is an organisational one.
What Finance Actually Wants From Marketing
Finance does not want a perfect attribution model. Finance wants to know that the money is not being wasted. Those are different problems, and conflating them leads marketing teams to spend enormous energy on measurement infrastructure when the real issue is a confidence gap.
What builds confidence with finance is not precision. It is consistency, transparency, and commercial logic. A CFO who trusts that marketing understands the business, can connect spend to outcomes in a plausible way, and is honest about what can and cannot be measured, will defend the marketing budget. A CFO who receives a quarterly deck of channel metrics they do not understand will cut it at the first sign of pressure.
The practical implication is that the ROI conversation needs to start before the money is spent, not after. This means agreeing with finance on what success looks like at the beginning of the planning cycle. What commercial outcomes is marketing expected to contribute to? What is a reasonable timeframe? What will be measured, and how? When those agreements are in place, the post-campaign conversation is a review of a shared framework, not a defence of a marketing team’s choices.
This is the discipline that most marketing teams skip. They plan campaigns, execute them, then measure what they can and present it. The problem is that by the time the results are in, finance has already formed a view, and it is usually shaped by whether revenue targets were hit, not by whether the marketing metrics looked good.
The Real Source of Poor Marketing ROI
I have managed hundreds of millions in ad spend across more than thirty industries. The most consistent source of poor return I have seen is not bad execution. It is misaligned briefs. Campaigns that were planned and executed competently, but against an objective that was never connected to a commercial goal.
There is a parallel here to a conversation the industry has been having about sustainability. There is a lot of energy going into measuring the carbon impact of ad serving, which is a real issue but a relatively small one compared to the strategic waste embedded in how marketing is planned. Bad briefs, campaigns funded because of internal politics rather than commercial logic, spend allocated to channels because “we’ve always done it” rather than because there is evidence it works. The industry focuses on the visible waste while the invisible waste, the structural kind, goes largely unexamined.
The brief is where ROI is made or lost. A brief that starts with a commercial question (what does the business need this marketing to do?) and works backward to channel and creative will almost always outperform a brief that starts with a channel or a format and works forward to a business rationale. This sounds obvious. It is not how most briefs are written.
When marketing and sales are aligned around shared commercial goals, this problem is much easier to manage. The Sales Enablement and Alignment hub covers this relationship in depth, because the brief quality problem and the ROI problem are often the same problem seen from different angles. Marketing that is planned in isolation from sales tends to optimise for the wrong things.
Building a Contribution Framework That Finance Will Accept
A contribution framework is not a measurement system. It is a shared agreement about how marketing’s role in commercial outcomes will be described, estimated, and evaluated. It does not need to be perfect. It needs to be credible, consistent, and agreed in advance.
The simplest version has four components (sketched in code after this list):
- The commercial goal: what business outcome this investment is expected to contribute to, expressed in terms finance recognises (revenue, pipeline, retention, market share).
- The marketing mechanism: what marketing will actually do, and why that is expected to influence the commercial goal.
- The leading indicators: what will be measured during the campaign to provide early evidence the mechanism is working.
- The lagging indicators: what will be measured after the campaign to assess commercial contribution.
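To make the structure concrete, here is a minimal sketch of how such a framework might be captured as a single agreed record. The field names, targets, and indicator lists are hypothetical illustrations, not prescriptions; the point is that all four components exist in one artefact before any money is spent.

```python
from dataclasses import dataclass, field

@dataclass
class ContributionFramework:
    """One record per campaign, agreed with finance before the spend is committed."""
    commercial_goal: str    # outcome in finance's terms, e.g. pipeline or retention
    goal_target: float      # the number the campaign will be judged against
    mechanism: str          # what marketing will do, and why it should move the goal
    leading_indicators: list[str] = field(default_factory=list)   # in-flight evidence
    lagging_indicators: list[str] = field(default_factory=list)   # post-campaign evidence

# Illustrative example; every value here is invented.
framework = ContributionFramework(
    commercial_goal="qualified pipeline (GBP)",
    goal_target=1_200_000,
    mechanism="category content plus paid search capture of in-market demand",
    leading_indicators=["branded search volume", "MQL rate from target accounts"],
    lagging_indicators=["pipeline sourced and influenced", "win rate vs prior period"],
)
```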
The key word in that last component is “contribution.” Marketing rarely causes revenue in isolation. A customer buys because of a combination of product quality, sales effectiveness, pricing, timing, competitive context, and marketing influence. Claiming full credit for a sale is as dishonest as claiming no credit. The contribution framework acknowledges this complexity and estimates marketing’s share of it, rather than either overstating or understating the case.
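To see why the agreed share matters, consider the arithmetic with illustrative numbers. The revenue figure, spend, and 30% share below are invented for the example; the share itself is whatever was agreed with finance in advance, not something computed after the fact.

```python
# Estimate marketing's contribution rather than claiming full credit
# for revenue that had many causes. All figures are illustrative.
revenue_from_influenced_deals = 2_000_000  # revenue where marketing touched the deal
agreed_contribution_share = 0.30           # share agreed with finance up front
marketing_spend = 400_000

attributed_revenue = revenue_from_influenced_deals * agreed_contribution_share
roi = (attributed_revenue - marketing_spend) / marketing_spend
print(f"Estimated contribution: {attributed_revenue:,.0f}; ROI: {roi:.0%}")
# Estimated contribution: 600,000; ROI: 50%
```

Note how different the claim is from “marketing drove £2m of revenue.” The smaller, pre-agreed number is the one finance will actually believe.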
When I was working on turnaround situations at agency level, the businesses that were losing money were almost always doing so because they had no contribution framework. They were spending on marketing because they felt they should, measuring activity because it was easy, and presenting results that looked fine until the P&L didn’t. Getting those businesses to profitability started with agreeing what marketing was supposed to do for the business, and then only funding the marketing that could plausibly do it.
Attribution: What to Use, What to Ignore, and What to Admit You Don’t Know
Attribution is a tool, not a truth. The moment you treat an attribution model as a fact rather than an approximation, you have lost the plot, and you will eventually lose the argument with finance when the model’s limitations become apparent.
Last-click attribution is still widely used and widely misleading. It assigns all credit to the final touchpoint before conversion, which systematically undervalues brand activity, content marketing, and any channel that operates earlier in the customer journey. If you are using last-click to evaluate your full marketing mix, you are making resource allocation decisions based on a distorted picture.
Data-driven attribution models are better, but they require volume to be reliable and they are still working from a model of customer behaviour, not a direct observation of it. Moz’s thinking on feedback loops in measurement is relevant here: the way you measure shapes what you optimise for, which shapes what you measure. If your attribution model rewards paid search, you will invest more in paid search, which will show up as a stronger signal in your attribution model, which will justify more investment. The feedback loop can be circular rather than informative.
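The sensitivity to model choice is easy to demonstrate. The sketch below runs one hypothetical converting journey through three common rule-based models; the journey and channel names are invented, but the divergence in credit is the point: the same data supports three different budget decisions depending on which rule you picked.

```python
# Three rule-based attribution models applied to the same converting journey.
def last_click(path):
    return {path[-1]: 1.0}                 # all credit to the final touchpoint

def first_touch(path):
    return {path[0]: 1.0}                  # all credit to the first touchpoint

def linear(path):
    share = 1.0 / len(path)                # equal credit to every touchpoint
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

path = ["content", "organic search", "email", "paid search"]  # one hypothetical journey
for model in (last_click, first_touch, linear):
    print(model.__name__, model(path))
# last_click  {'paid search': 1.0}
# first_touch {'content': 1.0}
# linear      {'content': 0.25, 'organic search': 0.25, 'email': 0.25, 'paid search': 0.25}
```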
The most honest approach is to use attribution as one input among several, to be transparent about its limitations, and to triangulate it with other evidence: sales team feedback, customer surveys, controlled experiments where budget allows, and the kind of qualitative intelligence that does not show up in dashboards but is often the most reliable signal you have. Understanding the competitive context is part of this: if your metrics look flat but a competitor just ran a major campaign, that is relevant information that no attribution model will surface.
The Experiments That Actually Prove Value
If you want to demonstrate marketing ROI with genuine confidence, the most reliable method is a controlled experiment. Hold out a market, a segment, or a time period from marketing activity, and compare the commercial outcomes against the exposed group. The difference is your best estimate of marketing’s contribution.
This is harder to run than it sounds. You need clean separation between test and control groups, enough volume for statistical significance, and a business that is willing to accept short-term underperformance in the holdout group for the sake of long-term measurement clarity. Most businesses are not willing to do this consistently, which is why most marketing ROI claims are estimates rather than evidence.
But even imperfect experiments are more valuable than no experiments. A geo-based holdout, where you run activity in some regions and not others, gives you directional evidence that is more credible than any attribution model. A channel pause test, where you turn off a channel for a defined period and measure the impact, tells you more about that channel’s real contribution than months of attribution data.
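The readout arithmetic for a geo holdout is simple; the hard part is the test design (matched regions, enough volume, a clean separation). As a sketch, with invented regional figures:

```python
# Illustrative geo-holdout readout: compare exposed regions against held-out ones.
# Real tests need matched regions and statistical checks; this is the arithmetic only.
test_revenue_index = [52.1, 49.8, 55.3, 51.0]     # regions exposed to the campaign
control_revenue_index = [47.2, 48.1, 46.5, 47.9]  # holdout regions, no campaign

test_mean = sum(test_revenue_index) / len(test_revenue_index)
control_mean = sum(control_revenue_index) / len(control_revenue_index)
lift = (test_mean - control_mean) / control_mean
print(f"Estimated lift from marketing: {lift:.1%}")
# Estimated lift from marketing: 9.8% (with these illustrative numbers)
```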
The willingness to run these tests is itself a signal to finance. It says that marketing is confident enough in its contribution to put it to a test, rather than hiding behind the complexity of multi-touch attribution. That confidence, demonstrated through action rather than assertion, changes the dynamic of the budget conversation.
BCG’s work on tracking consumer behaviour through periods of economic pressure is a useful reference point here. The businesses that maintained measurement discipline during difficult periods were better positioned to make the case for marketing investment when conditions improved. The discipline is not just about proving ROI in the moment. It is about building the institutional knowledge that makes future investment decisions more defensible.
How to Present Marketing ROI Without Losing the Room
The presentation of ROI matters as much as the measurement of it. A technically sound analysis presented in marketing language will not land with a finance audience. A simpler analysis presented in commercial language will.
The structure that works is: what we set out to do, what we did, what happened, what we think marketing contributed, and what we would do differently. That last element is often missing from marketing ROI presentations, and its absence undermines credibility. If marketing never acknowledges that anything could have been done better, finance will assume the team is not being honest about the full picture.
Quantify where you can. Estimate where you cannot, and say so. Do not present an estimate as a fact. Do not present a correlation as a cause. Do not claim credit for outcomes that would have happened without marketing. These are the errors that erode trust over time, and once that trust is gone, the budget conversation becomes adversarial rather than collaborative.
One thing I have found consistently useful is to include a “what we do not know” section in ROI presentations. It sounds counterintuitive. But acknowledging the limits of your measurement, clearly and without apology, signals analytical rigour rather than weakness. Finance professionals deal with uncertainty constantly. They respect people who are honest about it.
If you are building the broader case for how marketing and sales work together to drive commercial outcomes, the resources in the Sales Enablement and Alignment section provide the operational context that sits behind these measurement conversations. ROI is not just a reporting problem. It is a planning and alignment problem, and the two are inseparable.
The Long Game: Building Measurement Credibility Over Time
The most effective marketing ROI case is not made in a single presentation. It is built over several years of consistent, honest reporting that connects marketing activity to commercial outcomes in a way that holds up under scrutiny.
Early in my career, I learned something that has stayed with me. When I was in my first marketing role around 2000, I wanted to build a new website and the MD said no. Rather than accepting that, I taught myself to code and built it. The lesson was not about resourcefulness, though that was part of it. The lesson was that the people holding the budget needed to see evidence before they would release it, and the only way to get that evidence was to do something demonstrable first. Marketing ROI works the same way. You build credibility by demonstrating it in small ways before you can claim it in large ones.
Start with the metrics that are easiest to connect to commercial outcomes, even if they are imperfect. Build a track record of honest reporting. Introduce experiments gradually. Develop the contribution framework in collaboration with finance rather than presenting it to them. Over time, the conversation shifts from “prove that marketing works” to “how do we get more from what’s working.”
That shift is worth more than any measurement tool. It is the difference between a marketing budget that is defended by the CFO and one that is cut at the first sign of pressure.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
