ROI Reports Are Lying to You. Here’s Why.
An ROI report is a summary of marketing return on investment, typically showing revenue or value generated relative to spend across channels, campaigns, or time periods. Most of them are wrong, not because the data is fabricated, but because the logic connecting activity to outcome is built on assumptions nobody has bothered to test.
I’ve reviewed hundreds of these reports across 30 industries. The numbers are usually clean. The narrative is usually confident. And the conclusions are usually optimistic in ways that serve the person presenting more than the business receiving.
Key Takeaways
- Most ROI reports measure what is easy to attribute, not what actually drove growth, creating a systematic bias toward lower-funnel channels.
- Incrementality is the only honest test of marketing ROI: would this revenue have happened without the activity? Most reports never ask this question.
- Performance marketing often claims credit for demand it did not create, capturing intent that brand and awareness activity generated upstream.
- A good ROI report separates correlation from contribution, acknowledges what cannot be measured, and builds in honest approximation rather than false precision.
- The goal of an ROI report is not to defend the budget. It is to help the business allocate the next one better.
In This Article
- Why Most ROI Reports Are Built to Confirm, Not to Inform
- What Incrementality Actually Means and Why It Changes Everything
- The Attribution Models That Flatter the Wrong Channels
- How to Build an ROI Report That Is Actually Useful
- The Reporting Behaviours That Erode Trust Over Time
- What a Good ROI Report Actually Looks Like
- The Relationship Between ROI Reporting and Budget Confidence
- The Effie Lesson: Effectiveness Is Not the Same as Efficiency
Why Most ROI Reports Are Built to Confirm, Not to Inform
Early in my career, I was as guilty of this as anyone. We were running performance campaigns for a retail client, and the last-click attribution model was painting a very flattering picture of paid search. ROAS was strong. The client was happy. The report was a neat story of cause and effect.
What we were not asking was whether any of that revenue would have happened anyway. The customers clicking on branded search terms already knew the brand. They had probably seen a TV spot, walked past a store, or been referred by a friend. Paid search was the last door they walked through, not the reason they came to the building.
This is the core problem with most ROI reporting. It conflates the last measurable touchpoint with the cause of the sale. And because digital channels are easier to track than TV, outdoor, or word of mouth, they consistently absorb credit that belongs elsewhere.
The result is a reporting culture that systematically undervalues brand, awareness, and upper-funnel activity, and systematically overvalues the channels that sit closest to conversion. Over time, this distorts budget allocation in ways that damage long-term growth even while short-term numbers look fine.
ROI reporting does not sit in isolation; it sits inside the wider context of go-to-market and growth strategy. Attribution is not just a measurement problem. It shapes which activities get funded and which get cut, which means it shapes the trajectory of the business.
What Incrementality Actually Means and Why It Changes Everything
Incrementality asks one question: would this outcome have happened without this activity? It sounds simple. It is operationally inconvenient, which is probably why most businesses avoid it.
The cleanest way to test incrementality is a holdout experiment. You run a campaign to one group and withhold it from a statistically comparable group, then measure the difference in outcomes. What you are left with is the genuine contribution of the activity, stripped of the noise from people who were going to convert regardless.
When businesses run these tests for the first time, the results are often uncomfortable. Retargeting campaigns that looked like high-ROAS gold frequently show weak incrementality, because most of the people in the retargeting pool were already going to buy. You are not persuading them. You are just paying to show them an ad on the way to a decision they had already made.
I have seen this play out repeatedly. A client running heavy retargeting spend across a fashion e-commerce brand was reporting a ROAS north of 8x. We ran a holdout test over six weeks. The incremental ROAS was closer to 2.5x. The gap was not fraud or misreporting. It was the difference between attributed revenue and caused revenue. They were counting customers who would have arrived anyway.
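To make the gap concrete, here is the holdout arithmetic in a few lines of Python. The spend and revenue figures are illustrative, not the client's actual data; only the shape of the attributed-versus-incremental gap mirrors the example above.

```python
# Minimal holdout-test arithmetic. All figures are hypothetical; only the
# attributed-vs-incremental gap mirrors the retargeting example above.

spend = 100_000                                       # retargeting spend over the test window

# Platform-attributed revenue: everything the ads touched gets counted.
attributed_revenue = 800_000
attributed_roas = attributed_revenue / spend          # 8.0x

# Holdout test: compare revenue per user between exposed and holdout groups.
exposed_users, exposed_revenue = 200_000, 800_000     # saw the ads
holdout_users, holdout_revenue = 50_000, 137_500      # comparable group, no ads

exposed_rpu = exposed_revenue / exposed_users         # 4.00 per user
holdout_rpu = holdout_revenue / holdout_users         # 2.75 per user

# Incremental revenue = lift per user x number of exposed users.
incremental_revenue = (exposed_rpu - holdout_rpu) * exposed_users
incremental_roas = incremental_revenue / spend

print(f"Attributed ROAS:  {attributed_roas:.1f}x")    # 8.0x
print(f"Incremental ROAS: {incremental_roas:.1f}x")   # 2.5x
```

The calculation is not sophisticated. The discipline is in running the holdout at all, and in having the comparable group to measure against.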
Think of it like a clothes shop. A customer who tries something on is far more likely to buy than one who is just browsing. But the fitting room did not create the desire to buy. The brand reputation, the recommendation from a friend, the window display: those are what brought the customer in. If you only measure the fitting room, you will keep investing in fitting rooms and wonder why footfall is declining.
The Attribution Models That Flatter the Wrong Channels
Last-click attribution is the most widely used model in digital marketing. It is also one of the least useful for understanding what actually drove growth. It rewards the channel that happened to be present at the moment of conversion, regardless of what came before it.
First-click attribution has the opposite problem. It over-credits the initial touchpoint and ignores everything that happened between awareness and purchase.
Linear attribution distributes credit equally across all touchpoints, which sounds fair but treats a display impression and a product page visit as equivalent contributions. They are not.
Data-driven attribution, which the major ad platforms now default to, uses machine learning to assign fractional credit across touchpoints based on historical patterns. It is better than last-click. It is still operating within the closed ecosystem of the platform reporting it, which means it cannot see what happened off-platform, and it has a structural incentive to show that the platform’s own channels contributed more than they did.
None of these models answer the incrementality question. They describe how credit was distributed across a customer experience. They do not tell you how much of that experience the marketing actually caused.
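To see how mechanically these models move credit around, here is a short sketch of the three simple models described above. The journey and conversion value are hypothetical; the point is that the same sale produces three different stories depending on the model you pick.

```python
# How three common attribution models split credit for the same conversion.
# The customer journey and value are hypothetical.

journey = ["display", "organic search", "email", "paid search"]  # touchpoints in order
conversion_value = 100.0

def last_click(touchpoints):
    # All credit to the final touchpoint before conversion.
    return {touchpoints[-1]: conversion_value}

def first_click(touchpoints):
    # All credit to the touchpoint that started the journey.
    return {touchpoints[0]: conversion_value}

def linear(touchpoints):
    # Equal credit to every touchpoint, regardless of its role.
    share = conversion_value / len(touchpoints)
    return {t: share for t in touchpoints}

for model in (last_click, first_click, linear):
    print(model.__name__, model(journey))
# last_click  {'paid search': 100.0}
# first_click {'display': 100.0}
# linear      {'display': 25.0, 'organic search': 25.0, 'email': 25.0, 'paid search': 25.0}
```

Notice that the total credit is identical in every case. Attribution models redistribute the same revenue; none of them adds the counterfactual that incrementality testing provides.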
For a broader perspective on how go-to-market thinking intersects with commercial growth, BCG’s commercial transformation framework makes a useful distinction between revenue captured from existing demand and revenue generated through new market activity. It is the same tension that sits at the heart of every ROI report.
How to Build an ROI Report That Is Actually Useful
A useful ROI report does four things. It separates correlation from contribution. It acknowledges what cannot be measured. It builds in honest approximation rather than false precision. And it connects marketing activity to business outcomes, not just media metrics.
Here is what that looks like in practice.
Separate the measurement layers
Not everything in a marketing programme can be measured with the same confidence. Paid search conversion tracking is highly reliable. The contribution of a brand awareness campaign to six months of organic growth is not. A good ROI report distinguishes between what was measured directly, what was modelled, and what was estimated. Collapsing all three into a single ROAS number is how you end up with a confident-looking report that is quietly misleading.
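One practical discipline is to tag every figure in the report with its measurement layer, so a modelled estimate can never quietly masquerade as a tracked fact. A minimal sketch, with hypothetical channels and numbers:

```python
# Tag every reported figure with how it was obtained, so the report cannot
# collapse modelled estimates into directly measured fact.
# Channels and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class ChannelResult:
    channel: str
    revenue: float
    layer: str  # "measured" | "modelled" | "estimated"

results = [
    ChannelResult("paid search", 420_000, "measured"),      # conversion-tracked
    ChannelResult("brand TV", 310_000, "modelled"),         # from mix modelling
    ChannelResult("word of mouth", 150_000, "estimated"),   # informed assumption
]

for r in results:
    print(f"{r.channel:<14} £{r.revenue:>9,.0f}  [{r.layer}]")
```

The tag costs nothing to carry and forces the question every time a number moves into a summary slide: how do we actually know this?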
Include a baseline assumption
What would have happened without the marketing activity? This is the question most reports skip entirely. Even a rough baseline, based on seasonality, historical trend, or category growth, gives you something to compare against. Without it, you cannot distinguish between marketing that caused growth and marketing that ran alongside growth that was happening anyway.
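Here is what a rough baseline looks like in practice: a sketch assuming you have last year's revenue for the same period and an estimate of category growth. All numbers are hypothetical.

```python
# Rough baseline: what revenue would we have expected this period with no
# campaign, given seasonality and category trend? Numbers are hypothetical.

prior_year_same_period = 500_000      # same months last year (captures seasonality)
category_growth_rate = 0.06           # category grew roughly 6% year on year

baseline = prior_year_same_period * (1 + category_growth_rate)   # 530,000

actual_revenue = 580_000              # what the business actually did
campaign_spend = 40_000

# Naive view: credit all year-on-year growth to the campaign.
naive_roas = (actual_revenue - prior_year_same_period) / campaign_spend      # 2.00x

# Baseline-adjusted view: credit only growth above what was expected anyway.
baseline_adjusted_roas = (actual_revenue - baseline) / campaign_spend        # 1.25x

print(f"Baseline expectation:    £{baseline:,.0f}")
print(f"Naive ROAS:              {naive_roas:.2f}x")
print(f"Baseline-adjusted ROAS:  {baseline_adjusted_roas:.2f}x")
```

The baseline here is crude, and that is the point. Even a crude counterfactual is better than implicitly assuming the counterfactual was zero.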
Measure what matters to the business, not just what the platform reports
Platform-reported metrics are a perspective on reality, not reality itself. I have spent enough time managing large ad budgets to know that platform dashboards are optimised to make the platform look good. That does not mean the data is useless. It means it needs to be cross-referenced against business data: actual revenue, customer acquisition cost, lifetime value, repeat purchase rate.
When I was running performance strategy at iProspect, one of the most useful disciplines we built was reconciling platform-reported conversions against actual CRM data. The gaps were instructive. Sometimes the platform was overcounting. Sometimes it was undercounting. Either way, the reconciliation told you something the dashboard alone could not.
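A minimal version of that reconciliation, assuming both systems can export a shared order identifier. The IDs here are hypothetical, and real exports need de-duplication and cleaning first:

```python
# Reconcile platform-reported conversions against CRM orders.
# IDs are hypothetical; real data needs a shared identifier
# (order ID, hashed email, etc.) and cleaning before comparison.

platform_conversions = {"ord-001", "ord-002", "ord-003", "ord-004", "ord-005"}
crm_orders           = {"ord-001", "ord-003", "ord-004", "ord-006", "ord-007"}

matched       = platform_conversions & crm_orders   # both systems agree
platform_only = platform_conversions - crm_orders   # possible overcounting
crm_only      = crm_orders - platform_conversions   # possible undercounting

print(f"Matched:       {len(matched)}")
print(f"Platform-only: {len(platform_only)}  (duplicates? test orders? phantom conversions?)")
print(f"CRM-only:      {len(crm_only)}  (lost tracking? offline or untagged sales?)")
```

The two mismatch buckets are where the learning is. Each one is a hypothesis about your measurement setup waiting to be investigated.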
Build in the unmeasured
Brand equity, share of voice, category consideration, word of mouth: these things are real. They affect revenue. They are difficult to measure with precision, but that does not mean they should be absent from an ROI report. The honest approach is to include them with appropriate caveats, rather than pretending that only what is measurable is what matters.
Marketing mix modelling, done properly, can help here. It uses statistical regression to estimate the contribution of different channels, including offline and brand activity, to revenue outcomes over time. It is not perfect. It is a better approximation than last-click attribution, and it forces you to think about the full picture rather than just the trackable slice.
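At its core, mix modelling is a regression of revenue on channel spend over time. Here is a deliberately simplified sketch using ordinary least squares on synthetic data; a real model would add adstock (carryover effects), saturation curves, and seasonality controls.

```python
# Deliberately simplified marketing mix model: regress weekly revenue on
# weekly channel spend with ordinary least squares. Real MMM adds adstock,
# saturation curves, and seasonality. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(42)
weeks = 104                                  # two years of weekly data

tv = rng.uniform(0, 50_000, weeks)           # weekly TV spend
search = rng.uniform(0, 20_000, weeks)       # weekly paid search spend
base = 200_000                               # organic baseline revenue per week

# Synthetic "true" world: TV returns 1.5x, search 3.0x, plus noise.
revenue = base + 1.5 * tv + 3.0 * search + rng.normal(0, 15_000, weeks)

# Design matrix with an intercept column to capture the baseline.
X = np.column_stack([np.ones(weeks), tv, search])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

print(f"Estimated baseline:      £{coef[0]:,.0f} per week")
print(f"Estimated TV return:     {coef[1]:.2f} per £ spent")
print(f"Estimated search return: {coef[2]:.2f} per £ spent")
```

Note what the intercept is doing: it is the revenue the model attributes to nothing in the media plan. That baseline term is the statistical cousin of the incrementality question, estimated from history rather than from an experiment.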
The Reporting Behaviours That Erode Trust Over Time
There is a version of ROI reporting that is essentially a defence document. It exists to justify the budget that was spent, not to inform the allocation of the next one. I have sat in enough board presentations to recognise it immediately. The numbers are always up. The attribution is always favourable. The caveats are buried or absent.
This approach works in the short term. It keeps clients happy and budgets intact. Over time, it destroys credibility, because the business eventually notices that the marketing numbers and the business numbers are not telling the same story.
The CFO who approved a 40% increase in digital spend two years ago and has not seen a corresponding lift in revenue is not going to keep approving increases. And when the cuts come, they often come indiscriminately, because nobody built the trust or the evidence base to defend the activities that were actually working.
The alternative is harder in the short term and far more valuable over time. It means presenting results honestly, including the parts that did not work. It means building a track record of accurate forecasting, not just retrospective justification. And it means treating the ROI report as a tool for improving decisions, not a document for protecting positions.
For teams thinking about how to structure this kind of honest measurement practice, Vidyard’s analysis of why go-to-market execution feels harder than it used to touches on a related challenge: the gap between what marketing teams are measuring and what the business actually needs to know.
What a Good ROI Report Actually Looks Like
I will give you a concrete structure. This is not a template in the sense of fill-in-the-blanks. It is a framework for thinking about what belongs in a report that is genuinely useful.
Business context first. What was the commercial objective? What was the market doing? What happened to the category? Without this, the marketing results have no frame of reference.
Measured outcomes. What did the business actually achieve? Revenue, new customers acquired, customer acquisition cost, repeat purchase rate. These are business numbers, not media numbers.
Channel contribution, with confidence levels. Which channels contributed to those outcomes? At what confidence level? Direct measurement, modelled estimate, or informed assumption? Be explicit about which is which.
Incrementality evidence. Where you ran holdout tests or natural experiments, report the incremental results. Where you did not, say so, and note what you would need to do to find out.
What did not work. This is the section most reports omit. It is also the most useful. If a channel underperformed, say so and explain the hypothesis for why. If a creative direction did not land, document it. This is where the learning lives.
Recommendations. Based on the evidence, what should change about the next period’s allocation? This is the reason the report exists.
Customer feedback tools can add a qualitative layer to this kind of reporting; Hotjar's feedback and growth loop framework is a useful example, particularly when you are trying to understand why conversion rates moved in a way the quantitative data alone cannot explain.
The Relationship Between ROI Reporting and Budget Confidence
One of the most consistent patterns I have seen across agency and client-side work is this: the businesses that invest in honest measurement get more budget over time, not less. Not because the numbers are always good, but because the finance team trusts them.
When a CFO or CEO sees a marketing report that acknowledges uncertainty, distinguishes between what was caused and what was correlated, and makes clear recommendations about what to do differently, they treat it as a credible business document. When they see a report that claims every channel exceeded target and every pound delivered exceptional return, they start discounting the whole thing.
The irony is that overselling ROI is one of the most effective ways to erode the budget confidence you are trying to protect. Honest approximation, paired with a clear methodology, builds the kind of trust that survives a bad quarter.
This connects to a broader point about how market penetration strategy interacts with measurement. Growth through new customer acquisition looks very different in an ROI report than growth through retention or upsell, and conflating the two produces reporting that obscures the actual health of the growth engine.
There is a lot more to explore on the mechanics of building measurement into a growth strategy from the start. The articles on go-to-market and growth strategy here at The Marketing Juice cover the strategic architecture that makes honest ROI reporting possible, rather than treating it as a retrofit at the end of a campaign.
The Effie Lesson: Effectiveness Is Not the Same as Efficiency
Judging the Effie Awards gave me a useful lens on this. The Effies are the industry’s most rigorous effectiveness awards. Entries have to demonstrate business outcomes, not just campaign metrics. They have to show the logic connecting the activity to the result.
What struck me reviewing entries was how often the most efficient campaigns, the ones with the best ROAS or the lowest cost-per-acquisition, were not the most effective ones. Effectiveness, in the Effie sense, means moving the business forward in a meaningful way: growing market share, changing brand perception, acquiring genuinely new customers, not just harvesting existing intent more cheaply.
Efficiency is a useful operational metric. It tells you whether you are getting value from the spend you have committed. Effectiveness is the more important question: is the marketing programme actually growing the business, or is it optimising the extraction of value from a customer base that brand and word of mouth are sustaining?
Most ROI reports measure efficiency. Very few measure effectiveness in this deeper sense. The ones that do tend to be produced by marketing teams that have earned a seat at the commercial table, because they are asking the same questions the business is asking, not just defending their channel allocations.
For organisations thinking about how to structure go-to-market measurement from a commercial transformation perspective, BCG’s work on launch strategy and commercial planning offers a useful framework for connecting marketing investment to business stage and growth objectives.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
