ROI Reporting Is Lying to You. Here’s Why.
ROI reporting is the most trusted and most misleading artefact in marketing. Most reports are built to confirm decisions already made, credit channels that got lucky, and reassure stakeholders who were never going to read past the executive summary anyway. If your reporting tells a clean, coherent story every single month, that is not a sign of a well-run marketing operation. It is a sign that someone is smoothing the edges.
Good ROI reporting does not just measure what happened. It forces an honest conversation about what caused it, what you cannot attribute with confidence, and what the business should do differently as a result. Most marketing teams are not having that conversation.
Key Takeaways
- Most ROI reports are attribution narratives, not causal proof. The channel that gets the credit is rarely the one that did the work.
- Last-click and last-touch models systematically overvalue lower-funnel activity and undervalue the demand creation that made conversion possible in the first place.
- Honest reporting acknowledges what cannot be measured, not just what can. Confidence intervals matter more than clean numbers.
- Incrementality testing is the only reliable way to separate genuine ROI from coincidence. Most teams skip it because the results are uncomfortable.
- ROI reporting should inform budget decisions, not justify them. If your reports are written after the budget is set, they are not reports. They are receipts.
In This Article
- Why Most ROI Reports Are Attribution Theatre
- What ROI Reporting Is Actually Supposed to Do
- The Attribution Problem Is Not Going Away
- Incrementality: The Test Most Teams Avoid
- How to Build ROI Reporting That Is Actually Honest
- The Metrics That Actually Predict Growth
- The Conversation Your Reporting Should Be Starting
Why Most ROI Reports Are Attribution Theatre
Early in my career I was obsessed with lower-funnel performance. Conversion rates, cost per acquisition, return on ad spend. The numbers were clean, the story was tight, and the client loved the deck. What I did not appreciate at the time was that a significant portion of what we were crediting to paid search was going to happen anyway. Someone who has already decided to buy your product and types your brand name into Google is not being converted by your ad. They are being intercepted on the way to a decision they had already made.
It took me years and a lot of budget misallocation across multiple clients to really internalise that insight. The ad got the click. The brand, the product, the word of mouth, the awareness campaign from six months ago, those built the intent. Performance marketing captured it. That is not nothing, but it is not the whole story, and reporting that treats it as the whole story will consistently push budget toward channels that look efficient while starving the channels that are actually doing the heavy lifting.
This is not a new observation. The measurement community has been writing about attribution problems for years. But knowing the problem exists and actually building reporting that accounts for it are two very different things. Most teams default to whatever their analytics platform reports by default, wrap it in a slide deck, and call it ROI reporting. That is not reporting. That is data presentation with a story bolted on.
If you want a broader view of how measurement fits into commercial strategy, the Go-To-Market and Growth Strategy hub covers the full picture, from audience targeting to channel planning to how you structure your reporting to actually inform decisions.
What ROI Reporting Is Actually Supposed to Do
Before you fix your reporting, it helps to be clear about what it is supposed to achieve. ROI reporting has three legitimate purposes. First, it tells you whether your marketing spend is generating returns above the cost of capital: in plain terms, whether it is worth doing. Second, it helps you allocate future budget more intelligently by identifying what is working and what is not. Third, it builds internal confidence in the marketing function by demonstrating commercial accountability.
Notice what is not on that list. ROI reporting is not supposed to justify decisions already made. It is not supposed to make the marketing team look good. It is not supposed to paper over a bad quarter with a metric that happened to move in the right direction. I have sat in enough agency reviews and client board meetings to know that these are the actual purposes most reports serve in practice. The language of accountability is used to perform accountability rather than to achieve it.
When I was running the agency at iProspect, we grew from around 20 people to over 100 and moved from the bottom of the market to a top-five position in the UK. A big part of that was being willing to tell clients things their previous agency had not told them, including that some of what they were measuring was not measuring what they thought it was. That honesty was uncomfortable in the short term and commercially valuable in the long term. Clients who trusted our analysis gave us more budget to work with, because they knew the numbers were real.
The Attribution Problem Is Not Going Away
Attribution is the central problem in ROI reporting, and it is genuinely hard. A customer might see a display ad on Monday, read a blog post on Wednesday, click a retargeting ad on Friday, and convert via a branded search on Saturday. Which channel gets the credit? Last-click says search. First-click says display. Linear splits it four ways. Data-driven attribution runs a model that most marketers cannot interrogate and produces numbers that feel authoritative but are based on assumptions the platform vendor does not fully disclose.
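To make that concrete, here is a minimal sketch in Python of how that same four-touchpoint journey earns completely different credit depending on the model you pick. The journey and the revenue figure are hypothetical; the point is the divergence, not the numbers.

```python
# A hypothetical four-touchpoint journey and order value, for illustration only.
journey = ["display", "blog_post", "retargeting", "branded_search"]
revenue = 100.0

def last_click(path):
    # All credit to the final touchpoint before conversion
    return {path[-1]: revenue}

def first_click(path):
    # All credit to the touchpoint that started the journey
    return {path[0]: revenue}

def linear(path):
    # Credit split evenly across every touchpoint
    share = revenue / len(path)
    return {channel: share for channel in path}

for name, model in [("last-click", last_click),
                    ("first-click", first_click),
                    ("linear", linear)]:
    print(f"{name:>11}: {model(journey)}")

# last-click  credits branded_search with the full 100
# first-click credits display with the full 100
# linear      credits each touchpoint with 25
# Same journey, same revenue, three different ROI stories.
```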
None of these models are correct. They are all approximations, and the question is not which one is right but which one is least wrong for your specific situation and least likely to produce decisions you will regret. The honest answer for most businesses is that you cannot fully attribute revenue to individual channels, and any report that claims to do so with precision is overstating what the data can support.
This does not mean measurement is hopeless. It means you need to be explicit about the limitations of your model. When I was judging the Effie Awards, one of the things that distinguished the strongest entries was not that they had perfect measurement. It was that they were honest about what they could and could not prove, and they built their case around a coherent body of evidence rather than a single metric. The teams that submitted entries built around one attribution model and called it proof rarely made it through.
The challenge of attribution is not unique to any one sector. Forrester’s work on go-to-market struggles in healthcare highlights how complex buying journeys make it structurally difficult to assign credit to any single touchpoint, and that complexity is present in almost every B2B and considered-purchase B2C category.
Incrementality: The Test Most Teams Avoid
The most reliable way to measure true ROI is incrementality testing. The principle is straightforward: you withhold marketing activity from a control group and measure the difference in outcomes between that group and the exposed group. The difference is the incremental effect of your marketing. Everything else, the baseline conversions that would have happened anyway, is stripped out.
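The arithmetic is simple enough to sketch. The figures below are hypothetical geo-holdout numbers, not benchmarks; what matters is the gap between what a platform would report and what the test actually proves.

```python
# Hypothetical geo-holdout results, for illustration only.
exposed_users, exposed_conversions = 100_000, 2_400   # geos that saw the ads
control_users, control_conversions = 100_000, 1_800   # geos with ads withheld

exposed_rate = exposed_conversions / exposed_users    # 2.4%
control_rate = control_conversions / control_users    # 1.8% baseline

# The lift the marketing actually caused
incremental_rate = exposed_rate - control_rate                # 0.6%
incremental_conversions = incremental_rate * exposed_users    # 600

baseline_share = 1 - incremental_conversions / exposed_conversions
print(f"conversions the platform reports: {exposed_conversions}")
print(f"conversions the test can prove:   {incremental_conversions:.0f}")
print(f"share that would have happened anyway: {baseline_share:.0%}")  # 75%
```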
In practice, most teams do not run incrementality tests regularly, because the results are uncomfortable. When you run a proper geo holdout test on your paid search brand campaigns, you will almost always find that a meaningful share of those conversions would have happened without the ad. That is not a reason to stop running brand search. There are good reasons to maintain brand coverage. But it does mean the ROI number you have been reporting is inflated, and it means some of the budget you have been allocating based on that number should probably be going somewhere else.
I have had this conversation with clients many times. The reaction is usually a mix of scepticism and mild defensiveness, because the implication is that money has been spent in a way that was not fully justified by the evidence. That is a difficult thing to say and a more difficult thing to hear. But it is the conversation that leads to better decisions, and better decisions compound over time in ways that comfortable reporting never does.
The growth literature is full of examples of teams that optimised aggressively within a channel and hit a ceiling, because they were capturing existing demand efficiently rather than expanding the pool of potential customers. Incrementality testing is one of the tools that helps you see that ceiling before you hit it.
How to Build ROI Reporting That Is Actually Honest
Honest ROI reporting does not require perfect data. It requires a clear framework, explicit assumptions, and the discipline to report what the data actually says rather than what you wish it said. Here is how I approach it.
Start with business outcomes, not marketing metrics. Revenue, margin, customer acquisition cost, lifetime value. These are the numbers that matter to the business. Marketing metrics like impressions, clicks, and engagement rates are useful diagnostics, but they are not the outcome. A report that leads with click-through rates and buries revenue contribution is a report designed to impress, not to inform.
Be explicit about your attribution model and its limitations. Do not just report the number. Report the model that produced it and the assumptions it relies on. If you are using last-click, say so and acknowledge that this model overstates lower-funnel contribution. If you are using a platform’s data-driven model, note that it is a black box and that you are treating it as a directional indicator rather than a definitive answer.
Separate what you know from what you are inferring. Some things you can measure directly: email click rates, paid search conversion rates, revenue from a specific promotional code. Other things you are inferring from correlation: the relationship between brand awareness and long-term revenue growth, the contribution of organic content to pipeline. Report these differently. Conflating measured outcomes with inferred ones is where most reporting goes wrong.
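One practical way to mark that difference is to report measured numbers as ranges rather than point estimates. Here is a minimal sketch using a standard normal-approximation interval for a conversion rate; the traffic figures are invented.

```python
import math

# Hypothetical paid search figures for the last 30 days
conversions, visitors = 230, 9_400

rate = conversions / visitors
# 95% normal-approximation confidence interval for a binomial proportion
margin = 1.96 * math.sqrt(rate * (1 - rate) / visitors)

print(f"conversion rate: {rate:.2%} "
      f"(95% CI: {rate - margin:.2%} to {rate + margin:.2%})")
# Prints roughly "2.45% (95% CI: 2.13% to 2.76%)". Reporting the range
# trains stakeholders to expect uncertainty instead of clean numbers.
```

The interval itself matters less than the habit: once stakeholders see ranges on the things you can measure, the things you are merely inferring stand out honestly.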
Include a view of what you cannot measure. This sounds counterintuitive, but explicitly acknowledging the gaps in your measurement builds credibility rather than undermining it. If you ran a campaign that you believe contributed to brand consideration but cannot prove it in the data, say that. Explain what you would need to measure it properly and whether that investment is justified. Stakeholders who trust your reporting are stakeholders who give you the latitude to invest in channels that do not have a clean attribution story.
Understanding how reporting connects to broader commercial decision-making is something the BCG work on commercial transformation addresses well. The point is not just to measure better but to build the organisational capability to act on what measurement reveals.
The Metrics That Actually Predict Growth
There is a version of ROI reporting that is technically accurate and commercially useless. It tells you exactly what happened last month, with great precision, and gives you no useful information about what will happen next month or next year. This is the reporting equivalent of driving by looking in the rear-view mirror.
The metrics that actually predict growth tend to be leading indicators: new customer acquisition rate, share of wallet among existing customers, brand consideration among your target audience, net promoter score trends, pipeline velocity. These metrics are harder to measure than conversion rates and harder to tie directly to spend. But they are the metrics that tell you whether your marketing is building something durable or just optimising within a shrinking pool of existing intent.
Think about it this way. A clothing retailer knows that someone who tries on a garment is dramatically more likely to buy it than someone who walks past it on the rail. The in-store experience, the fit, the feel, the moment of consideration, those are the things that move people from awareness to purchase. If you only measured sales at the till, you would have no idea whether your changing rooms were too cramped, your staff too absent, or your sizing too inconsistent. You would just see the conversion rate and conclude that everything was fine until it was not.
Marketing works the same way. The metrics closest to conversion are the easiest to measure and the least useful for understanding what is actually driving growth. The metrics that tell you whether you are building a bigger pool of potential customers, whether your brand is becoming more or less salient, whether your content is genuinely influencing consideration, those are harder to capture but more commercially important.
The Vidyard analysis of why go-to-market feels harder points to something relevant here: buyer journeys are longer and more diffuse than they used to be, which means the gap between marketing activity and measurable outcome has widened. Reporting that does not account for that gap will consistently undervalue the work that creates demand and overvalue the work that captures it.
The Conversation Your Reporting Should Be Starting
The best ROI report I ever saw was two pages long. It showed revenue generated, cost of marketing activity, and a clear statement of what could be attributed with confidence versus what was inferred. It included a section on what the team did not know and what they planned to do to find out. It made three specific recommendations for budget reallocation with a rationale for each. And it was honest about one campaign that had underperformed and what the team had learned from it.
That report started a real conversation. The business made better decisions as a result of it. Compare that to the forty-slide decks full of green arrows and percentage uplifts that most marketing teams produce every month. Those decks end conversations. They are designed to close down scrutiny, not invite it.
If your ROI reporting is not generating genuine questions from your stakeholders, that is a problem. It means either the report is not being read, or it is being read and no one believes it, or it is so smooth and reassuring that everyone has quietly accepted it as performance theatre rather than real accountability. None of those outcomes serve the business.
The BCG research on evolving go-to-market strategy makes the point that commercial transformation requires building trust between marketing and the broader business. That trust is built through honest reporting, not impressive reporting. There is a difference, and most teams have confused the two.
ROI reporting is one part of a broader set of decisions about how you structure your marketing for growth. If you want to see how measurement, channel strategy, and audience planning connect into a coherent commercial approach, the Go-To-Market and Growth Strategy hub is the place to start.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
