ROI Models Are Lying to You. Here’s Why You Build Them Anyway

An ROI model is a structured framework that estimates the financial return from a marketing investment before, during, or after a campaign. At its most useful, it connects spend to revenue through a chain of measurable assumptions: reach, conversion rate, average order value, margin, payback period. At its least useful, it is a spreadsheet built backwards from the number a stakeholder wanted to see.

The goal of a good ROI model is not precision. It is honest approximation. You will never know exactly what your marketing caused. What you can do is build a model rigorous enough to make better decisions and transparent enough to know when the assumptions are breaking down.

Key Takeaways

  • ROI models are decision tools, not measurement instruments. Their value is in forcing explicit assumptions, not producing exact numbers.
  • Most ROI models over-credit lower-funnel activity because that is where attribution is easiest, not where value is actually created.
  • A model built backwards from a target return is not an ROI model. It is a justification document with a formula attached.
  • The most dangerous number in any ROI model is the one nobody questioned in the room.
  • Incrementality, not attribution, is the right question. The issue is not who gets the credit, it is whether the activity changed the outcome at all.

I have sat in rooms where ROI models were treated as gospel and rooms where they were waved away as fiction. Both instincts are wrong. The discipline of building one, stress-testing it, and being honest about where the assumptions are weak is one of the most commercially valuable things a marketing team can do. It forces a conversation that most organisations avoid.

What Does an ROI Model Actually Measure?

Return on investment in marketing is deceptively simple in theory. You spend X, you get back Y, and the difference is your return. The complexity arrives the moment you try to define what “got back” means and how much of it was caused by the marketing rather than by everything else happening at the same time.

A marketing ROI model typically estimates one or more of the following: revenue generated from a specific campaign or channel, cost per acquisition across the funnel, customer lifetime value relative to acquisition cost, payback period on a cohort of new customers, or incremental revenue above a baseline.

The word “incremental” is doing a lot of work in that last point. Incrementality is the right question and most ROI models do not ask it clearly enough. They measure what happened, not what would have happened without the activity. That gap is where most marketing ROI gets overstated.

Earlier in my career I was guilty of this. I over-weighted lower-funnel performance because it was the easiest thing to measure and it produced the most flattering numbers. Paid search, retargeting, branded keywords. The attribution tools loved them. The problem is that a significant portion of what those channels were credited for was going to happen anyway. Someone who had already decided to buy your product typed your brand name into Google. You paid for that click. The model called it a conversion. That is not return on investment, it is return on confirmation bias.

If you want to think more broadly about how ROI modelling fits into commercial growth planning, the Go-To-Market and Growth Strategy hub covers the strategic context that makes individual models more useful and more honest.

How Do You Build an ROI Model That Is Actually Useful?

The structure of a useful marketing ROI model follows a logical chain. Each link in the chain is an assumption. The job is to make every assumption explicit, defensible, and visible to the people making decisions based on the output.

Start with the inputs. What are you spending, across which channels, over what time period? Include media spend, production costs, agency fees, technology costs, and any internal resource that has a real opportunity cost. Most models undercount this side of the equation because some costs are harder to attribute to a specific campaign.

Then build the funnel. Impressions or reach, click-through or engagement rate, landing page conversion rate, lead-to-sale conversion rate, average order value or contract value, gross margin. Each of these is a number you either know from historical data, can estimate from industry benchmarks, or have to be honest about not knowing yet.

The output is a projected revenue figure, a projected margin contribution, and a payback timeline. From those you can calculate return on ad spend, marketing ROI, cost per acquisition, and customer lifetime value to acquisition cost ratio.
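
The chain described above can be sketched as a small function. This is a minimal illustration rather than a template: every input value below is a made-up assumption, not a benchmark, and the names are arbitrary.

```python
# Minimal marketing ROI funnel sketch.
# All input values are illustrative assumptions, not benchmarks.
def roi_model(spend, impressions, ctr, landing_cvr, lead_to_sale,
              avg_order_value, gross_margin):
    clicks = impressions * ctr
    leads = clicks * landing_cvr
    sales = leads * lead_to_sale
    revenue = sales * avg_order_value
    margin_contribution = revenue * gross_margin
    return {
        "revenue": revenue,
        "margin_contribution": margin_contribution,
        "roas": revenue / spend,                          # revenue per unit of spend
        "roi": (margin_contribution - spend) / spend,     # profit per unit of spend
        "cpa": spend / sales if sales else float("inf"),  # cost per acquisition
    }

result = roi_model(spend=50_000, impressions=2_000_000, ctr=0.01,
                   landing_cvr=0.05, lead_to_sale=0.20,
                   avg_order_value=400, gross_margin=0.45)
```

With these invented inputs the model returns a 1.6x ROAS alongside a negative ROI, which is exactly the kind of gap between an efficiency metric and a profit metric that the model exists to surface.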

What separates a useful model from a decorative one is what happens next. You build three scenarios: conservative, base case, and optimistic. You stress-test the most sensitive assumptions. You ask which single input, if it were wrong by 20%, would flip the model from viable to unviable. That input deserves the most scrutiny before you commit budget.
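
That 20% question can be mechanised. The sketch below applies a 20% adverse shock to each assumption in turn and reports which ones flip the model from profitable to unprofitable; all figures are invented for illustration.

```python
# Sensitivity sketch: shock each assumption by 20% in the adverse
# direction and see which ones flip the model from viable to unviable.
# All input values are illustrative, not benchmarks.
def margin_contribution(p):
    sales = p["impressions"] * p["ctr"] * p["cvr"] * p["close_rate"]
    revenue = sales * p["aov"] * (1 - p["refund_rate"])
    return revenue * p["gross_margin"]

base = {"impressions": 2_000_000, "ctr": 0.012, "cvr": 0.06,
        "close_rate": 0.25, "aov": 400, "refund_rate": 0.10,
        "gross_margin": 0.45}
spend = 55_000

profit_after_shock = {}
for key in ("ctr", "cvr", "close_rate", "aov", "refund_rate",
            "gross_margin"):
    shocked = dict(base)
    # "adverse" means down 20% for revenue drivers, up 20% for refunds
    shocked[key] *= 1.2 if key == "refund_rate" else 0.8
    profit_after_shock[key] = margin_contribution(shocked) - spend

flips = [k for k, v in profit_after_shock.items() if v < 0]
```

With these numbers the base case is marginally profitable, every revenue-side rate flips the model when shocked, and the refund rate does not, because it enters the arithmetic as (1 - rate). Real models have far more of this asymmetry, which is what makes the exercise worth doing before budget is committed.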

When I was running agency P&Ls, the models that failed us were almost always the ones where conversion rate assumptions had been copied from a previous campaign without checking whether the audience, offer, or competitive context was comparable. A 4% conversion rate from a warm email list tells you almost nothing about what to expect from cold paid social. The number looks credible because it came from real data. The problem is it came from the wrong data.

What Are the Most Common Ways ROI Models Break Down?

There are predictable failure modes in marketing ROI modelling. Recognising them is more useful than any template.

The first is reverse engineering. Someone decides the campaign needs to show a 4x return to get approved. The model is then built to produce that number, with assumptions adjusted until the output is acceptable. This is not analysis. It is advocacy dressed as analysis. I have seen this happen in agencies pitching for budget and in client-side teams defending their plans to finance. The tell is that the assumptions are never questioned because questioning them would undermine the conclusion.

The second is attribution inflation. Last-click, first-click, and even multi-touch attribution models systematically overstate the contribution of certain channels. Lower-funnel channels capture intent that was created elsewhere. Growth-focused teams who rely entirely on platform-reported ROAS are measuring what the platform wants them to measure, not what actually drove business outcomes. If your paid social platform is telling you it generated 8x return and your revenue has not moved, the model is wrong, not the revenue.

The third is ignoring time. A model that shows positive ROI over 30 days may show negative ROI over 90 days if customer churn is high. A model that looks weak over 90 days may look strong over 24 months if customer lifetime value is high. The time horizon is not a neutral technical choice. It is a strategic one, and it should be chosen based on the business reality, not the reporting cycle.

The fourth is omitting the baseline. If your market is growing at 15% annually and your revenue grew 12%, your marketing did not produce growth. It underperformed the market. An ROI model that does not account for what would have happened without the activity is measuring the wrong thing.
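
The arithmetic behind that example is worth making explicit. A sketch, using the same figures as the paragraph above:

```python
# Baseline check: growth relative to the market, not in absolute terms.
market_growth = 0.15   # the category grew 15% over the period
revenue_growth = 0.12  # your revenue grew 12% over the same period

# Growth relative to a "ride the market" baseline
relative_growth = (1 + revenue_growth) / (1 + market_growth) - 1
```

The result is roughly -2.6%: positive absolute growth, shrinking market share. Any ROI model that credits marketing with the full 12% is measuring the market, not the marketing.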

The fifth, and most insidious, is the unchallenged assumption. Every model has one number that nobody in the room questioned because it sounded reasonable, came from a credible source, or was put there by someone senior. That number is usually the one that breaks everything when reality arrives. I have made this mistake. The most expensive lesson I took from managing large media budgets was that the assumptions you do not stress-test are the ones that will cost you.

How Should You Handle Attribution Within an ROI Model?

Attribution is not the same as causation. This distinction matters enormously when you are building an ROI model and it is one of the most consistently misunderstood points in marketing measurement.

Attribution models answer the question: which touchpoints were present before a conversion? Causation answers the question: which touchpoints changed whether a conversion happened at all? These are different questions with different answers, and confusing them leads to systematically misallocating budget.

Think of it this way. Someone sees your brand video on YouTube, reads a review on a comparison site, clicks a retargeting ad, and then searches your brand name and converts through paid search. Your attribution model will distribute credit across some or all of those touchpoints depending on the model you use. But the real question is: would that person have converted anyway? If they were already in-market and already aware of your brand, the retargeting ad and the branded search term captured intent they already had. The YouTube video may have created the intent in the first place. Or none of it mattered and they converted because a colleague recommended you.

There is a useful analogy here. A clothes shop knows that someone who tries on a garment is far more likely to buy it than someone who just browses. But the act of trying it on does not cause the purchase in isolation. The person who walked into the shop, found the item, and decided it was worth trying on had already done most of the work. The fitting room gets the credit. The window display, the location, the brand reputation, and the pricing did the heavy lifting.

The practical implication for ROI models is that you need to separate attribution reporting from incrementality thinking. Use attribution to understand the experience. Use incrementality testing, holdout groups, and media mix modelling to understand what actually drove outcomes. They are complementary tools, not interchangeable ones.

Forrester has written extensively about the structural challenges in go-to-market measurement, including how attribution gaps distort planning in complex sales environments. The same dynamics apply in most B2B and considered-purchase B2C categories.

What Role Does Upper-Funnel Activity Play in an ROI Model?

This is where most ROI models fall apart, and where most marketing teams underinvest as a result.

Upper-funnel activity (brand awareness, reach, consideration) does not convert neatly into a spreadsheet. There is no clean click trail from a TV spot to a purchase. There is no last-click attribution for the moment someone first heard your brand name and filed it away for future reference. So most ROI models either ignore upper-funnel spend entirely or treat it as a cost without a return.

This creates a structural bias. The channels that are easiest to measure look most efficient. The channels that do the hardest work, reaching new audiences and building the mental availability that makes lower-funnel activity possible, look like waste. Over time, teams that manage to the ROI model cut upper-funnel spend and then wonder why their lower-funnel efficiency starts declining. They are harvesting demand they stopped creating.

I spent years watching this pattern play out. Agencies win new business by promising measurable performance. They optimise toward what is measurable. The client sees strong ROAS numbers for 18 months and then revenue growth stalls. The diagnosis is usually that the brand has been living off existing demand and has stopped reaching new audiences. The ROI model looked healthy right up until the moment the pipeline ran dry.

A more honest ROI model accounts for upper-funnel investment as a necessary input to the whole system, not a discretionary add-on. It uses brand tracking, share of search, and market penetration data to estimate the contribution of awareness activity, even if that estimate is imprecise. Honest approximation is more useful than false precision.

BCG’s work on go-to-market strategy and product launch planning makes a similar point in a different context: the investments that build market position rarely show clean short-term returns, but they determine whether the short-term returns are available at all.

How Do You Present an ROI Model to Senior Stakeholders?

The technical quality of your model matters less than your ability to communicate it clearly and defend it honestly. Senior stakeholders are not usually interested in the mechanics. They want to know: what are we betting on, what does success look like, and what happens if we are wrong?

Lead with the assumptions, not the output. Show the three or four numbers that drive the model and explain where they came from. Historical data, industry benchmarks, or informed estimates. Be explicit about which ones are most uncertain. This builds credibility because it demonstrates that you have thought about the model rather than just produced it.

Present scenarios, not a single number. A base case with a conservative and an optimistic range is more defensible and more useful than a point estimate that will be wrong. It also frames the conversation around risk tolerance rather than prediction accuracy.

Be clear about what the model does not include. If you have not modelled the contribution of brand activity, say so. If the model assumes flat market conditions and you are in a volatile category, say so. The things you have excluded are often the most important things for a senior stakeholder to know.

Early in my agency career I watched a founder hand me the whiteboard pen mid-brainstorm and walk out to a client meeting. The instinct in that moment was to produce something impressive. The better instinct, which I learned over time, was to produce something honest and defensible. Impressive models that cannot survive a question are worse than simple models that can. The same principle applies when you are presenting an ROI model to a CFO or a board.

If you want to understand how ROI modelling connects to broader commercial planning, including channel strategy, budget allocation, and growth frameworks, the Go-To-Market and Growth Strategy hub is the right place to start. The model is only as good as the strategy it is built to evaluate.

What Metrics Should Anchor a Marketing ROI Model?

The metrics you choose to anchor an ROI model reveal the assumptions underneath it. Choose the wrong anchors and the model will optimise for the wrong outcomes.

Return on ad spend is the most commonly used metric and the most commonly misused. It measures revenue per pound of media spend, but it does not account for margin, it does not account for incrementality, and it does not account for the costs outside the media budget. A 6x ROAS on a product with 15% margin is not a good result. A 2.5x ROAS on a high-margin product with strong retention may be excellent.
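
The comparison in that paragraph reduces to two lines of arithmetic. A sketch, where the 60% margin in the second case is an assumption added for illustration (the text only says "high-margin"):

```python
# Margin-adjusted ROAS: gross profit returned per unit of media spend.
def profit_roas(roas, gross_margin):
    return roas * gross_margin

high_roas_low_margin = profit_roas(roas=6.0, gross_margin=0.15)  # approx. 0.9
low_roas_high_margin = profit_roas(roas=2.5, gross_margin=0.60)  # approx. 1.5
```

Anything below 1.0 means each pound of media spend returns less than a pound of gross profit before any non-media costs are counted, so the 6x campaign is underwater and the 2.5x campaign is not.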

Customer acquisition cost is more useful when paired with customer lifetime value. The ratio between the two tells you whether you are building a sustainable business or buying customers at a loss and hoping volume compensates. In subscription and repeat-purchase models, a CAC that looks expensive over 90 days may look cheap over 24 months. The model needs to reflect the actual economics of the business, not just the immediate transaction.
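
That 90-day versus 24-month point is easy to demonstrate. A sketch assuming simple geometric retention; the £25 monthly contribution, 95% monthly retention, and £180 CAC are all invented numbers:

```python
# Lifetime value under simple geometric retention (illustrative).
def ltv(monthly_contribution, monthly_retention, horizon_months):
    return sum(monthly_contribution * monthly_retention ** m
               for m in range(horizon_months))

cac = 180
ltv_90d = ltv(25, 0.95, horizon_months=3)   # CAC looks expensive
ltv_24m = ltv(25, 0.95, horizon_months=24)  # CAC looks cheap
```

Same customer, same spend: the LTV:CAC ratio moves from roughly 0.4 to roughly 2.0 purely by changing the horizon, which is why the horizon has to reflect the real economics of the business rather than the reporting cycle.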

Contribution margin per channel is underused and underrated. It asks how much gross profit each channel generates after accounting for all the costs associated with it, including production, technology, and fulfilment. It is harder to calculate than ROAS but it is much closer to what actually matters to the business.

Payback period matters more than most models acknowledge. If it takes 18 months to recover the cost of acquiring a customer, you need to be confident that customer will stay for at least that long and that your business has the cash flow to sustain the gap. Growth teams that focus exclusively on acquisition efficiency without modelling payback period often discover the problem too late.
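
Payback can be modelled the same way. A sketch that walks month by month until cumulative contribution covers the acquisition cost, with retention thinning the cohort as it goes; all figures are illustrative:

```python
# Months until cumulative expected contribution covers acquisition cost.
def payback_months(cac, monthly_contribution, monthly_retention,
                   max_months=60):
    cumulative, survival = 0.0, 1.0
    for month in range(1, max_months + 1):
        cumulative += monthly_contribution * survival
        if cumulative >= cac:
            return month
        survival *= monthly_retention  # expected share of cohort still active
    return None  # does not pay back within the horizon

months = payback_months(cac=180, monthly_contribution=25,
                        monthly_retention=0.95)
```

With these assumptions the cohort pays back in month nine. If your cash position cannot carry nine months of acquisition ahead of recovery, the model is telling you something the ROAS number never will.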

The most important metric in any ROI model is the one that most directly reflects the business outcome you are trying to drive. For a subscription business that is probably net revenue retention. For a retailer it might be repeat purchase rate. For a B2B business it could be pipeline value or closed revenue by cohort. The model should be built around the metric that matters to the business, not the metric that is easiest to pull from the analytics platform.

When Should You Update or Rebuild an ROI Model?

An ROI model is not a document you build once and file. It is a living tool that should be updated as assumptions are tested against reality and as the business context changes.

Update it when actual performance diverges significantly from the model. Not to make the model match reality retrospectively, but to understand which assumptions were wrong and why. That learning is worth more than the original model.

Update it when the competitive environment changes. A conversion rate assumption built during a period of low competition will not hold when a well-funded competitor enters the market. A cost-per-click assumption built in a low-demand period will not hold during peak season or when platform costs shift.

Update it when the product, pricing, or offer changes. The model is built on a set of business conditions. When those conditions change, the model changes with them.

Rebuild it from scratch when the strategy changes fundamentally. If you are moving from a direct-to-consumer model to a channel-led model, or from a single-market to a multi-market approach, the existing model’s assumptions are probably not transferable. Starting fresh is often faster than trying to retrofit a model built for different conditions.

The teams I have seen manage ROI models well treat them the way a good CFO treats a financial forecast: as a working hypothesis that is updated regularly, stress-tested against incoming data, and used to make decisions rather than justify them. The teams that manage them badly treat them as a one-time deliverable that goes into a deck and is never looked at again.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between ROAS and marketing ROI?

Return on ad spend measures revenue generated per unit of media spend. Marketing ROI is broader: it accounts for all marketing costs including production, agency fees, and technology, and it typically measures profit contribution rather than gross revenue. ROAS is a useful efficiency metric but it can be misleading if used as a proxy for overall marketing return, particularly in low-margin categories or where significant costs sit outside the media budget.

How do you account for brand activity in a marketing ROI model?

Brand and upper-funnel activity rarely produce clean attribution trails, which means most ROI models either ignore them or treat them as pure cost. A more honest approach uses proxy metrics: brand tracking data, share of search trends, and market penetration rates to estimate the contribution of awareness investment over time. Media mix modelling can also decompose revenue into contributions from different activity types, including brand, though it requires sufficient data volume to be reliable.

What is incrementality and why does it matter for ROI models?

Incrementality measures whether a marketing activity changed an outcome that would not have occurred without it. It is distinct from attribution, which only measures which touchpoints were present before a conversion. Most standard ROI models measure attributed revenue rather than incremental revenue, which means they overstate the return from channels that capture existing intent, such as branded search and retargeting, and understate the return from channels that create new demand.

How many scenarios should an ROI model include?

Three scenarios is the standard approach: conservative, base case, and optimistic. The conservative scenario should reflect what happens if the two or three most uncertain assumptions perform below expectation. The optimistic scenario should reflect realistic upside, not best-case fantasy. The gap between conservative and optimistic gives stakeholders a clearer picture of the risk range than a single point estimate, and it forces the team to identify which assumptions drive the most variance.

How often should a marketing ROI model be updated?

At minimum, an ROI model should be reviewed quarterly against actual performance and updated when assumptions are shown to be materially wrong. It should also be updated when competitive conditions change, when the product or pricing changes, or when platform costs shift significantly. A model that is built once and never revisited is not a planning tool. It is a justification document that becomes less accurate over time and more dangerous as a basis for budget decisions.
