ROI Models Are Lying to You. Here’s Why You Keep Trusting Them
ROI models are the most trusted and most misleading artefact in marketing. They give you a number, which feels like certainty, but what they actually give you is a structured set of assumptions dressed up as fact. The model is only as honest as the inputs, and most inputs are chosen by people with a vested interest in a particular outcome.
That is not cynicism. That is just how incentives work in organisations where marketing budgets are contested, channels compete for credit, and agencies are paid on performance. If you want to use ROI models well, you need to understand what they measure, what they ignore, and why the gap between those two things is where most bad decisions get made.
Key Takeaways
- ROI models measure what is trackable, not what is valuable. The two are rarely the same thing.
- Most lower-funnel ROI looks strong because it captures demand that already existed, not because it created new demand.
- Attribution models are a political document as much as an analytical one. Who controls the model controls the budget narrative.
- Honest ROI modelling requires incrementality thinking: would this revenue have happened without the spend?
- A single ROI model across all channels and funnel stages is not a measurement framework. It is a fiction agreed upon by committee.
In This Article
- Why ROI Models Feel Rigorous But Often Are Not
- What ROI Models Actually Measure
- The Incrementality Problem
- How Attribution Becomes a Political Document
- Marketing Mix Modelling: Useful, Not Definitive
- What Honest ROI Modelling Looks Like in Practice
- The Conversation Most Marketing Teams Are Not Having
- What to Do With an Imperfect Model
This article is part of a broader set of thinking on Go-To-Market and Growth Strategy, where I write about the commercial decisions that actually determine whether marketing budgets produce business results or just produce activity.
Why ROI Models Feel Rigorous But Often Are Not
Early in my career I was an enthusiastic advocate for performance marketing ROI. The numbers were clean. The dashboards were compelling. You could sit in a client meeting and point to a return on ad spend figure and watch the room relax. It felt like accountability. It felt like proof.
What I did not fully appreciate then was that I was measuring what was measurable, not what was causal. Those are completely different things. A customer who clicks a paid search ad and converts was already looking for what you sell. The search was the last step in a decision that started somewhere else, maybe weeks earlier, maybe with a brand impression they cannot recall, maybe with a word-of-mouth recommendation that never appears in any model. The paid search ad gets the credit. The ROI looks exceptional. And the budget goes into paid search, year after year, because the model says it works.
I have managed hundreds of millions in ad spend across more than thirty industries. The pattern repeats everywhere. Lower-funnel channels consistently report strong ROI because they are closest to the transaction. Upper-funnel investment consistently struggles to justify itself because the measurement is harder. So brands systematically underinvest in the work that builds demand and overinvest in the work that captures it. The model is not wrong, exactly. It is just measuring the wrong thing and being treated as if it is measuring the right thing.
Think about a clothes shop. Someone who tries something on is dramatically more likely to buy than someone who just browses. But the fitting room did not create the desire to buy. Something else did: the window display, the brand reputation, the recommendation from a friend. If you only measured the fitting room, you would conclude it was the most important part of the business. You would be technically correct and commercially wrong.
What ROI Models Actually Measure
A standard ROI model calculates return as the revenue generated minus the cost of the investment, divided by that cost; the related return-on-ad-spend (ROAS) figure simply divides revenue by cost. Simple in theory. The problem is in the definition of “revenue generated,” which requires an attribution decision, and attribution decisions are where the model quietly embeds its assumptions.
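The arithmetic itself is trivial; everything contentious hides inside the revenue figure. A minimal sketch, with entirely hypothetical numbers:

```python
def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue per unit of spend."""
    return revenue / spend

def roi(revenue: float, spend: float) -> float:
    """Return on investment: net gain relative to spend."""
    return (revenue - spend) / spend

# Hypothetical channel: £10,000 of spend attributed £30,000 of revenue.
# A "3x return" on a ROAS basis is a 200% ROI -- and both figures
# inherit whatever attribution decision produced the £30,000.
print(roas(30_000, 10_000))  # 3.0
print(roi(30_000, 10_000))   # 2.0, i.e. 200%
```

Both functions are exact; neither tells you whether the £30,000 would have arrived without the spend.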
Last-click attribution assigns all revenue credit to the final touchpoint before conversion. It is still the default in many organisations, despite being widely understood to be misleading. First-click attribution overcorrects in the opposite direction. Linear attribution spreads credit equally across all touchpoints, which sounds fair but is rarely accurate. Time-decay models weight recent touchpoints more heavily, which has some logic but still does not tell you which touchpoints were actually causal.
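To make the differences concrete, here is a sketch of how the same converting journey earns credit under each heuristic. The journey, channel names, and half-life parameter are all invented for illustration:

```python
def last_click(journey):
    """All credit to the final touchpoint before conversion."""
    return {journey[-1]: 1.0}

def first_click(journey):
    """All credit to the first touchpoint."""
    return {journey[0]: 1.0}

def linear(journey):
    """Equal credit across every touchpoint."""
    share = 1.0 / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(journey, half_life=2.0):
    """Credit halves for every `half_life` steps further from conversion."""
    n = len(journey)
    weights = [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]
    total = sum(weights)
    credit = {}
    for channel, w in zip(journey, weights):
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# One hypothetical journey, four different budget narratives:
journey = ["display", "email", "organic", "paid_search"]
```

Run all four on the same journey and paid search's credit swings from 100% (last-click) to 0% (first-click). Same customer, same revenue, four different stories.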
Data-driven attribution, which uses machine learning to assign credit based on observed patterns, is better. But it is still a model, which means it is still an approximation, and it still struggles with channels that operate outside the digital ecosystem: out-of-home, broadcast, word-of-mouth, PR, events. Those channels often do enormous amounts of commercial work that simply does not show up in the attribution model. So they look unproductive. And they get cut.
When I was judging the Effie Awards, one of the recurring themes in the strongest entries was the explicit acknowledgement that measurement was imperfect, combined with a rigorous attempt to isolate what was actually driving business outcomes rather than what was easiest to track. The campaigns that won were not the ones with the cleanest dashboards. They were the ones where the teams had thought hardest about causality, not just correlation.
The Incrementality Problem
Incrementality is the question that most ROI models do not ask. It is: would this revenue have happened without the spend? If the answer is yes, or even probably yes, then the ROI figure is flattering a channel that is harvesting existing demand rather than creating new demand. The spend may still be justified, because capturing demand has value. But it is not the same value as creating demand, and conflating the two leads to systematically bad portfolio decisions.
The only rigorous way to measure incrementality is through controlled experiments: hold-out tests, geo-split tests, matched market tests. These are harder to run than pulling a dashboard report. They take longer. They require organisational patience and a willingness to accept that some of your best-looking channels may look considerably less impressive when you strip out the revenue that was going to happen anyway.
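The core arithmetic of a hold-out read is simple; the hard part is the experiment design. A sketch with invented numbers, not a finding from any real test:

```python
def incremental_lift(exposed_conversions: int, exposed_size: int,
                     holdout_conversions: int, holdout_size: int):
    """Compare conversion rates between exposed and hold-out groups.

    Returns the incremental conversion rate (percentage points gained)
    and the relative lift over the hold-out baseline.
    """
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    incremental_rate = exposed_rate - holdout_rate
    lift = incremental_rate / holdout_rate
    return incremental_rate, lift

# Hypothetical branded-search test: exposed geos convert at 5.2%,
# hold-out geos at 5.0%. Attribution would credit the channel with
# all 520 conversions; the experiment says only ~20 were incremental.
inc, lift = incremental_lift(520, 10_000, 500, 10_000)
```

A real test would also need a significance check on that 0.2-point gap; the sketch only shows why attributed conversions and incremental conversions are different quantities.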
I have seen brands run hold-out tests on their branded paid search campaigns and discover that switching off the spend had almost no effect on conversion volume. The people searching for the brand by name were going to find the brand anyway. The paid search ads were collecting revenue, not generating it. The ROI model said the channel was performing. The incrementality test said the channel was largely redundant. Both were measuring something real. Only one was measuring something useful.
This is not an argument against paid search. It is an argument for knowing what your spend is actually doing before you build a budget case around a model that may be telling you a comfortable story rather than an accurate one. Vidyard’s analysis of why go-to-market feels harder touches on a related issue: the measurement frameworks many teams use were built for simpler buying journeys than the ones they are now operating in.
How Attribution Becomes a Political Document
Attribution is not just a technical question. It is a political one. Whoever controls the attribution model controls the budget narrative. In agency relationships, this dynamic is particularly acute. An agency paid on performance has a structural incentive to define performance in a way that flatters their channels. I have been on both sides of this. Running agencies, I was aware of the pressure. Managing agency relationships on behalf of clients, I saw how it played out in reporting that was technically accurate but strategically misleading.
The most common version of this is an agency reporting channel-level ROI using a model where their channel receives disproportionate credit. The numbers look strong. The client renews. The budget increases. And the underlying business may or may not be growing, because no one has asked whether the total marketing investment is generating incremental revenue or just redistributing existing demand across a more expensive set of channels.
This is not a reason to distrust agencies. It is a reason to own your measurement framework rather than outsourcing it. The attribution model should be defined by the client, or at minimum agreed jointly, with explicit rules about how cross-channel credit is handled. If your agency is setting the attribution rules and reporting against them, you have a conflict of interest built into your measurement infrastructure.
Semrush’s overview of market penetration strategy makes a point worth connecting here: growth through new customer acquisition requires different investment logic than growth through existing customer conversion, and a single ROI model applied across both will systematically favour the latter because the measurement is easier and the returns appear faster.
Marketing Mix Modelling: Useful, Not Definitive
Marketing mix modelling (MMM) is the most sophisticated tool most organisations have for understanding the contribution of different channels to overall business outcomes. It uses econometric techniques to decompose revenue into its constituent drivers: base sales, price effects, distribution, seasonality, and marketing investment by channel. Done well, it captures offline channels that attribution models miss, accounts for diminishing returns, and gives you a portfolio view rather than a channel-level view.
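Under the hood, the core move is a regression that splits observed revenue across candidate drivers. A toy sketch on synthetic data; the channels, coefficients, and saturation curve are all invented, and real MMMs are far richer than this:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly observations

# Synthetic drivers (all invented): weekly spend by channel plus seasonality.
tv_spend = rng.uniform(0, 100, weeks)
search_spend = rng.uniform(0, 50, weeks)
season = 10 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

def saturate(spend, k=50.0):
    """Simple diminishing-returns curve: response flattens as spend grows."""
    return spend / (spend + k)

# Ground-truth data-generating process the model will try to recover.
revenue = (200.0                           # base sales
           + season                        # seasonality
           + 80 * saturate(tv_spend)       # TV contribution
           + 40 * saturate(search_spend)   # search contribution
           + rng.normal(0, 2, weeks))      # unexplained noise

# Decompose revenue into its drivers with ordinary least squares.
X = np.column_stack([np.ones(weeks), season,
                     saturate(tv_spend), saturate(search_spend)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
base_hat, season_hat, tv_hat, search_hat = coef
```

Production models add adstock (carry-over), price and distribution terms, and uncertainty estimation. The point of the sketch is narrower: the decomposition only recovers the truth here because the regressors match the data-generating process, which is exactly the assumption a real MMM cannot verify.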
It is also expensive, slow, backward-looking, and dependent on data quality that many organisations cannot reliably provide. The model is only as good as the inputs, and if your sales data, spend data, and external variables are inconsistent or incomplete, the model will produce confident-looking outputs based on unreliable foundations.
The better MMM practitioners I have worked with are explicit about the uncertainty ranges in their outputs. They present scenarios rather than single-point estimates. They flag where the model is extrapolating beyond the observed data range. That kind of intellectual honesty is rare, because clients often want a definitive answer and the temptation is to give them one, even when the data does not support it.
BCG’s work on go-to-market strategy highlights a principle that applies directly here: the quality of a strategic decision is determined by the quality of the assumptions feeding it, not by the sophistication of the analytical framework wrapping those assumptions. A complex model built on weak assumptions is not better than a simple model built on strong ones. It is just more expensive and harder to challenge.
What Honest ROI Modelling Looks Like in Practice
Honest ROI modelling starts with a clear statement of what the model can and cannot measure. That sounds obvious. It almost never happens. Most ROI presentations lead with the outputs and bury the assumptions, if they surface them at all. Reversing that order, leading with the assumptions and being explicit about the gaps, changes the quality of the conversation considerably.
Second, honest modelling separates demand creation from demand capture. These require different measurement approaches and different return expectations. Demand capture (paid search, retargeting, conversion rate optimisation) should generate measurable short-term returns. Demand creation (brand advertising, content, PR, social) operates on longer timescales and requires different proxies: brand awareness, share of search, consideration metrics, new customer acquisition rates over time. Applying a short-term ROI lens to long-term demand creation investment is not rigorous measurement. It is a category error.
Third, honest modelling builds in incrementality testing as a routine practice rather than a one-off exercise. You do not need to run hold-out tests on every channel simultaneously. You run them systematically over time, building a picture of what is genuinely incremental and what is not. This is operationally demanding. It is also the only way to know whether your ROI model is telling you something true.
Crazy Egg’s breakdown of growth hacking approaches makes a useful distinction between optimising existing conversion paths and finding genuinely new growth levers. The same distinction applies to ROI modelling: optimising within a flawed measurement framework makes the framework more efficient, not more accurate.
Fourth, honest modelling acknowledges that some of the most commercially important marketing activity is the hardest to measure. Brand equity, earned media, category leadership, employer brand effects on talent acquisition costs: these are real business outcomes that rarely appear in an ROI model because they are difficult to quantify in the short term. Excluding them from the model does not make them less real. It makes the model less complete.
The Conversation Most Marketing Teams Are Not Having
When I took over leadership at iProspect and started growing the team from around twenty people to eventually over a hundred, one of the things that became clear quickly was that the measurement conversations we were having with clients were not the measurement conversations they actually needed. We were very good at reporting channel performance. We were less good at helping clients understand whether their total marketing investment was generating business growth or just generating marketing activity.
That distinction matters enormously. A brand can have excellent channel-level ROI across every line of its marketing plan and still be losing market share, because the channels are all optimised for conversion efficiency rather than audience reach. The model says everything is working. The business says something is wrong. Both are right. The model is just measuring the wrong thing.
The conversation most marketing teams are not having is about the total portfolio of investment and whether it is balanced correctly between building future demand and capturing current demand. Semrush’s analysis of growth tools is useful context here: the tools that help you optimise existing funnels are abundant and sophisticated. The frameworks that help you think about whether you are investing in the right funnels in the first place are rarer and less celebrated.
Forrester’s research on go-to-market struggles in complex industries points to a consistent theme: organisations that over-rely on lower-funnel measurement tend to underinvest in the earlier-stage market development work that creates the conditions for lower-funnel efficiency. The ROI model looks healthy right up until the pipeline dries up.
If you are building or refining your growth strategy, the full framework for thinking about these decisions is in the Go-To-Market and Growth Strategy hub, where I cover the commercial logic behind how marketing investment decisions should be structured across different growth stages and market contexts.
What to Do With an Imperfect Model
You are not going to achieve perfect measurement. No one has. The goal is honest approximation rather than false precision. A model that is transparent about its limitations and calibrated against incrementality data is considerably more useful than a model that produces clean outputs by hiding its assumptions.
Use multiple measurement approaches in parallel: attribution modelling for channel-level optimisation, MMM for portfolio-level budget allocation, incrementality testing for validating both. Treat the outputs as perspectives rather than verdicts. When they agree, you have higher confidence. When they disagree, you have a more interesting question to investigate.
Build explicit budget allocations for activity that your model cannot measure well. If you only fund what you can measure, you will systematically defund the work that builds long-term commercial value. That is not a measurement problem. It is a strategic problem that measurement is enabling.
And be honest with your leadership about what the model is and is not telling you. The temptation to present ROI figures as more certain than they are is understandable, particularly in organisations where marketing budgets are under pressure and the finance function wants proof of return. But presenting false certainty creates a fragile case. When the model’s predictions diverge from business outcomes, and they will, you have no credibility left to explain why. Presenting honest approximation, with explicit assumptions and known gaps, is a harder conversation in the short term and a much stronger position over time.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
