Media Mix Models: What They Tell You and What They Don’t
A media mix model is a statistical technique that uses historical sales and spend data to estimate the contribution of each marketing channel to business outcomes. Done well, it gives you a defensible, channel-level view of what is driving revenue, so you can make better allocation decisions. Done poorly, it gives you false confidence dressed up as econometrics.
The difference between those two outcomes matters more than most marketing teams acknowledge, and it starts with understanding what these models are actually measuring.
Key Takeaways
- Media mix models estimate channel contribution using historical data, but they reflect the past, not the future. Markets shift, and a model trained on last year’s spend will not automatically account for new competitive dynamics or audience behaviour changes.
- Most MMMs underweight brand and upper-funnel activity because long-term effects are harder to isolate. This creates a systematic bias toward short-term, lower-funnel channels that appears rational but erodes brand equity over time.
- The quality of your model output is directly constrained by the quality and granularity of your input data. Garbage in, confident-looking garbage out.
- MMM works best as one input into a broader measurement framework, not as the single source of truth. Triangulating with incrementality testing and attribution data gives you a more honest picture.
- The most common mistake is treating model outputs as precise allocations rather than directional signals. Optimising to the decimal point on a model with wide confidence intervals is a form of measurement theatre.
In This Article
- Why Media Mix Modelling Is Back in the Conversation
- How a Media Mix Model Actually Works
- What MMM Gets Right
- Where Media Mix Models Fall Short
- The Measurement Framework MMM Belongs In
- The Practical Questions to Ask Before Commissioning an MMM
- The Danger of Optimising to the Model
- Open-Source vs Vendor MMM: What the Shift Means
- What Good Looks Like in Practice
Why Media Mix Modelling Is Back in the Conversation
MMM is not new. Econometricians were building these models for FMCG brands in the 1980s. What is new is the renewed urgency around them, driven by three converging pressures: the deprecation of third-party cookies, the fragmentation of media across platforms that do not share data with each other, and a growing scepticism about last-click attribution that has been a long time coming.
When I was running iProspect and growing the team from around 20 people to over 100, the attribution conversation was dominated by platform-reported numbers. Google Analytics said one thing. The ad platforms said another. The finance team had a third figure. Everyone was right according to their own methodology, and none of it reconciled. MMM offered something those systems could not: a view of the business that did not depend on pixel-level tracking and was not reported by the same platforms being measured.
That independence is still the core appeal. If you want to understand whether your TV spend is doing anything useful, you cannot ask Google. You need a model that sits outside the platform ecosystem entirely.
If you are thinking about where MMM sits within a broader go-to-market measurement approach, the Go-To-Market and Growth Strategy hub covers the wider planning and commercial context that these models need to operate within.
How a Media Mix Model Actually Works
The mechanics are worth understanding, even if you are not building the model yourself. At its core, MMM uses multiple regression to separate the variation in sales into components attributable to different inputs: paid media channels, pricing, distribution, seasonality, macroeconomic conditions, and competitive activity where data is available.
The model needs a long run of historical data, typically two to five years of weekly observations, to identify patterns with any statistical confidence. It then estimates the incremental sales contribution of each channel, controlling for the other variables in the model. The output is usually expressed as a return on investment figure per channel, or as a decomposition showing what percentage of sales came from each driver.
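To make that concrete, here is a deliberately simplified sketch of the decomposition step in Python. Everything in it is invented for illustration: the channel names, spend figures, and simulated sales series are assumptions, and a real MMM would add adstock, saturation curves, and far more careful controls.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 156  # three years of weekly observations

# Hypothetical weekly spend per channel, in GBP thousands
tv     = rng.uniform(20, 80, n_weeks)
search = rng.uniform(10, 40, n_weeks)
social = rng.uniform(5, 30, n_weeks)

# Simulated weekly sales (also £k): baseline + channel effects + seasonality + noise
season = 50 * np.sin(2 * np.pi * np.arange(n_weeks) / 52)
sales = 500 + 1.8 * tv + 3.0 * search + 1.2 * social + season + rng.normal(0, 25, n_weeks)

# Ordinary least squares: sales ~ intercept + channels + seasonality
X = np.column_stack([np.ones(n_weeks), tv, search, social, season])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Decomposition: what share of total sales does each driver account for?
names = ["baseline", "tv", "search", "social", "seasonality"]
for name, contribution in zip(names, coef * X.sum(axis=0)):
    print(f"{name:12s} {100 * contribution / sales.sum():5.1f}% of sales")

# Naive per-channel ROI: incremental pounds of sales per pound of spend
for name, b in zip(names[1:4], coef[1:4]):
    print(f"ROI {name}: £{b:.2f} per £1")
```

The point of the sketch is the shape of the output, not the numbers: a percentage decomposition and a per-channel return, both of which inherit every assumption baked into the inputs.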
Two concepts matter a lot here. The first is adstock, which captures the idea that advertising effects decay over time rather than switching off the moment spend stops. A TV campaign run in October may still be influencing purchase decisions in December. The second is diminishing returns, which models the reality that each additional pound spent on a channel generates less incremental revenue than the one before it. Both of these are modelled with assumptions, and those assumptions shape the outputs significantly.
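The two transforms are easier to grasp in code than in prose. The sketch below uses a geometric adstock and a simple diminishing-returns curve; the decay rates, half-saturation value, and spend figures are all illustrative assumptions rather than recommended settings.

```python
import numpy as np

def adstock(spend, decay):
    """Geometric adstock: each week's effect carries over a fraction of last week's."""
    out, carry = np.zeros(len(spend)), 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def saturate(spend, half_saturation):
    """Diminishing returns: response flattens as weekly spend climbs."""
    return spend / (spend + half_saturation)

weekly_tv = np.array([0, 0, 60, 60, 60, 0, 0, 0], dtype=float)  # £k, a short burst

# Two decay assumptions, two very different pictures of carryover
print(adstock(weekly_tv, decay=0.3).round(1))  # effect fades within a couple of weeks
print(adstock(weekly_tv, decay=0.8).round(1))  # long tail well after spend stops

# Saturation: the 80th thousand pounds buys proportionally less than the 20th
for spend in (20, 40, 80):
    print(f"£{spend}k -> {saturate(spend, half_saturation=50):.2f} of maximum response")
```

In a real model it is these transformed series, not the raw spend, that go into the regression, which is exactly why the choice of decay and saturation parameters matters so much.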
This is where I would push any team commissioning an MMM to ask harder questions. The adstock decay rates and saturation curves are not facts discovered by the model. They are parameters that need to be calibrated, often using prior knowledge or Bayesian priors. A model that assumes a long adstock for digital display and a short one for paid search will produce very different allocation recommendations from one that assumes the reverse. Neither is necessarily correct.
What MMM Gets Right
There are things media mix models do genuinely well, and it is worth being clear about them before getting to the limitations.
First, they provide a cross-channel view that no single attribution system can. If you are spending across TV, outdoor, paid social, paid search, and email, no pixel-based system will connect those dots accurately. MMM treats the business as a system and estimates whether the whole is greater or less than the sum of its parts.
Second, they capture offline media effects that are invisible to digital attribution. This is particularly valuable for brands that still invest meaningfully in broadcast, print, or out-of-home. When I have sat with finance directors reviewing marketing budgets, the question about TV is almost always the same: “What is it actually doing?” MMM is often the most credible answer available.
Third, they are not subject to the self-serving bias of platform-reported attribution. Google’s attribution model will tend to credit Google channels. Meta’s will credit Meta. MMM has no such incentive. It is measuring against external business outcomes, not platform metrics.
Fourth, when built with sufficient data and rigour, MMM can identify the shape of the response curve for each channel. Knowing that your paid social spend starts to plateau at a certain weekly threshold is genuinely useful for budget planning. It tells you where you are on the curve and where the next pound is most efficiently deployed.
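A rough way to read a fitted response curve is to look at the marginal return: the revenue the next pound is expected to generate at the current spend level. The curve and figures below are entirely made up, but the arithmetic is the useful part.

```python
import numpy as np

def weekly_revenue(spend, max_revenue=120.0, half_saturation=40.0):
    """An illustrative fitted response curve for one channel (all figures in £k)."""
    return max_revenue * spend / (spend + half_saturation)

spend_levels = np.arange(5, 101, 5, dtype=float)
marginal = np.gradient(weekly_revenue(spend_levels), spend_levels)  # return on the next £1

crossing = spend_levels[np.argmax(marginal < 1.0)]
print(f"Marginal return falls below £1 per £1 at roughly £{crossing:.0f}k per week")
```

That threshold, rather than the headline ROI, is what tells you whether the next pound belongs in this channel or somewhere else.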
Where Media Mix Models Fall Short
This is the section that tends to get left out of vendor pitches, which is exactly why it matters.
The most significant structural limitation is that MMM measures the past. It tells you what drove sales given the media mix, competitive environment, and market conditions that existed during the modelling period. If any of those change, the model’s recommendations may not hold. A brand that shifts from a mature, stable category to a faster-moving competitive environment will find that last year’s model gives increasingly unreliable guidance.
The second limitation is the systematic undervaluation of brand and upper-funnel activity. Long-term brand effects are genuinely difficult to isolate in a regression model. The relationship between a brand campaign run eighteen months ago and a purchase decision today involves too many intervening variables to attribute cleanly. As a result, most MMMs will show a weaker ROI for brand-building activity than for performance channels, not because brand does not work, but because the model cannot see it clearly. This creates a bias that, if acted on uncritically, leads to progressive underinvestment in brand and an over-reliance on lower-funnel spend.
I spent a significant part of my early career overvaluing lower-funnel performance metrics. The numbers looked clean, the attribution was direct, and the case for investment was easy to make in a boardroom. It took a few years of watching brands thin out their upper-funnel spend and then wonder why their paid search efficiency was declining to understand what was actually happening. Much of what performance marketing captures was going to happen anyway. The demand was already created, by brand, by word of mouth, by category growth. You were just the last touchpoint before the purchase.
Third, MMM requires high-quality, consistent input data. If your spend data has gaps, if your sales data is not clean, if you cannot separate promotional pricing effects from media effects, the model will struggle. The confidence intervals on the outputs will be wide, but the numbers will still look precise. That precision is an illusion.
Fourth, most MMMs are built at a relatively aggregate level. They will tell you that paid social contributed X percent of sales, but they will not tell you which creative drove that contribution, which audience segment responded, or which specific placements were working. For that level of granularity, you need different tools.
The Measurement Framework MMM Belongs In
The honest position on MMM is that it is one measurement perspective, not the measurement answer. The most commercially rigorous approach I have seen treats it as part of a triangulated framework alongside two other methodologies.
Incrementality testing, specifically geo-based or holdout experiments, gives you a direct causal estimate of what a channel is contributing by comparing exposed and unexposed groups. It is slower and more expensive to run than MMM, but the causal logic is cleaner. If you want to know whether your connected TV spend is generating incremental sales, a well-designed geo experiment will give you a more defensible answer than a regression model.
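The arithmetic behind a geo holdout read is refreshingly simple, which is part of its appeal. The figures below are invented, and a real test needs carefully matched regions and a proper significance check, but the logic looks like this:

```python
# Illustrative read of a connected TV geo test (all figures in £k)

# Pre-period weekly sales, used to scale test regions against control regions
test_pre, control_pre = 480.0, 520.0
scaling = test_pre / control_pre

# Test period: CTV runs in the test regions only
test_sales, control_sales = 560.0, 530.0
ctv_spend = 35.0

# Counterfactual: what the test regions would likely have sold with no CTV
expected_without_ctv = control_sales * scaling
incremental_sales = test_sales - expected_without_ctv

print(f"Expected without CTV: £{expected_without_ctv:.0f}k")
print(f"Incremental sales:    £{incremental_sales:.0f}k")
print(f"Incremental ROI:      £{incremental_sales / ctv_spend:.2f} per £1 of CTV spend")
```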
Multi-touch attribution, despite its well-documented limitations, still provides useful signal at the granular level that MMM cannot reach. It is not a reliable guide to budget allocation, but it is useful for understanding creative performance, audience behaviour, and the sequencing of touchpoints within a campaign.
When these three approaches are run in parallel and their outputs are compared rather than treated as competing truths, you get something more useful than any single model can provide: a range of estimates with a sense of where they converge and where they diverge. Where they converge, you can act with confidence. Where they diverge, you have a question worth investigating rather than a number worth optimising to.
This triangulated approach is consistent with how BCG has framed the challenge of aligning marketing and business strategy, where measurement is treated as a strategic input rather than a reporting function.
The Practical Questions to Ask Before Commissioning an MMM
If you are considering building or buying a media mix model, these are the questions that will separate a useful exercise from an expensive one.
How much historical data do you have, and how clean is it? Two years of weekly data across all channels is a reasonable minimum. If your data has significant gaps or inconsistencies, invest in cleaning it before investing in the model.
What are the modelling assumptions, and who set them? Ask specifically about adstock decay rates and saturation curve parameters. If the vendor cannot explain how these were set or validated, that is a problem. Bayesian MMMs, which allow prior knowledge to inform the parameter estimates, are generally more transparent on this point than classical regression approaches.
What is not in the model? Every MMM has blind spots. Competitive spend data is often unavailable. Distribution changes may not be captured. New product launches create structural breaks that models struggle with. Knowing what the model cannot see is as important as knowing what it can.
How will the outputs be validated? A model that has never been tested against reality is just a hypothesis. Holdout tests or back-testing against periods not used in model training are standard validation approaches. If validation is not part of the methodology, the confidence in the outputs should be lower.
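Back-testing is straightforward enough to sketch. Using the same sort of toy weekly data as earlier, hold out the final quarter, fit on the rest, and see how badly the model misses the weeks it never saw; the data here is simulated purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_weeks, holdout = 156, 13  # three years of weeks, final quarter held out

# Toy spend (three channels) and sales series, all in £k
spend = rng.uniform(10, 60, (n_weeks, 3))
sales = 400 + spend @ np.array([1.5, 2.5, 1.0]) + rng.normal(0, 30, n_weeks)
X = np.column_stack([np.ones(n_weeks), spend])

# Fit on everything except the final quarter, then predict the held-out weeks
coef, *_ = np.linalg.lstsq(X[:-holdout], sales[:-holdout], rcond=None)
predicted = X[-holdout:] @ coef

mape = np.mean(np.abs((sales[-holdout:] - predicted) / sales[-holdout:])) * 100
print(f"Holdout error over the final {holdout} weeks: {mape:.1f}% MAPE")
```

If a vendor cannot show you this kind of number for periods the model was not trained on, treat the headline ROIs with more caution.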
How frequently will the model be refreshed? A model built once and used for three years is not a measurement tool. It is a historical document. Markets change, and models need to be updated to remain relevant. Quarterly refreshes are common for brands with sufficient data volume.
The Danger of Optimising to the Model
There is a specific failure mode I have seen in businesses that have invested in MMM and then used it too mechanically. The model says paid search has the highest ROI, so the team shifts budget toward paid search. Paid search efficiency then declines, because the model’s response curve was estimated at lower spend levels and the team has pushed past the saturation point. The model is then updated, and the recommendation changes. The team chases the optimisation, and the budget oscillates without ever settling into a genuinely efficient allocation.
The underlying issue is treating model outputs as precise allocations rather than directional signals. A media mix model can tell you that you are probably over-investing in one channel and under-investing in another. It cannot tell you the exact optimal split to three decimal places, because the confidence intervals on the estimates are wide and the real world is not a controlled experiment.
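If you want to see how wide that uncertainty really is, a simple bootstrap makes the point: resample the weeks, refit, and look at the spread of one channel's estimated return. The data below is simulated, so the exact interval means nothing; the gap between the point estimate and the interval is the thing to notice.

```python
import numpy as np

rng = np.random.default_rng(2)
n_weeks = 156

# Toy weekly data (£k): three channels plus a noisy sales series
spend = rng.uniform(10, 60, (n_weeks, 3))
sales = 400 + spend @ np.array([1.5, 2.5, 1.0]) + rng.normal(0, 60, n_weeks)
X = np.column_stack([np.ones(n_weeks), spend])

point, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Bootstrap: resample weeks with replacement, refit, collect channel 1's estimate
estimates = []
for _ in range(1000):
    idx = rng.integers(0, n_weeks, n_weeks)
    coef, *_ = np.linalg.lstsq(X[idx], sales[idx], rcond=None)
    estimates.append(coef[1])

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"Channel 1 return: £{point[1]:.2f} per £1, 95% interval £{lo:.2f} to £{hi:.2f}")
```

A budget split recommended to the decimal point sits comfortably inside an interval like that, which is why directional reads are the honest use of the output.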
The most useful thing a model can do is challenge an assumption. When I have seen MMM used well, it has typically been in situations where the output contradicts an existing belief and prompts a genuine investigation. The model says outdoor is contributing more than the team assumed. That is worth testing. The model says video on demand has near-zero incremental contribution at current spend levels. That is worth understanding before the next budget round.
That kind of challenge to established thinking is more valuable than a precise allocation recommendation. It is also harder to act on, because it requires the organisation to hold a belief loosely and be willing to test it, which is not always the culture in marketing teams under short-term pressure.
Forrester’s work on go-to-market strategy challenges points to a similar tension in how organisations handle measurement uncertainty, particularly when the data does not confirm the direction leadership has already committed to.
Open-Source vs Vendor MMM: What the Shift Means
The release of Meta’s Robyn and Google’s Meridian as open-source MMM frameworks has changed the accessibility of this methodology significantly. Brands that previously needed to commission a specialist econometrics firm can now build models in-house at substantially lower cost, provided they have the data science capability to do it properly.
The trade-off is worth naming clearly. Lower cost does not mean lower complexity. Robyn and Meridian are sophisticated tools that require meaningful statistical expertise to configure, validate, and interpret correctly. A poorly configured open-source model is not better than a well-configured vendor model. It is just cheaper and more likely to produce outputs that look authoritative but are not.
The more interesting implication is that the democratisation of MMM has created a market for people who can interpret and challenge model outputs, not just build them. The analytical skill that matters most is not knowing how to run the regression. It is knowing when to trust the output and when to question it. That is a judgement skill, and it is rarer than the technical skill.
For teams building out this capability, resources on growth and measurement tooling can help frame where MMM sits alongside other analytical investments, even if the specific tools differ.
What Good Looks Like in Practice
The best use of MMM I have encountered was not in a large FMCG company with a dedicated econometrics team. It was in a mid-sized retail brand that had been running a broadly stable media mix for several years and used an MMM primarily to challenge their assumptions about TV. The model suggested their regional TV spend was significantly more efficient than their national TV spend, which was counterintuitive given the cost-per-thousand differential. They designed a holdout test in two regions, the results confirmed the model’s direction, and they restructured their TV investment accordingly. The model did not give them the answer. It gave them the right question.
That is the standard to hold it to. Not: “does this model tell me exactly where to put my money?” But: “does this model surface assumptions worth testing?” If the answer is yes, it is earning its place in your measurement stack. If the answer is that it is primarily being used to justify decisions already made, it is measurement theatre, and expensive measurement theatre at that.
Building a measurement approach that is honest about what it knows and what it does not is one of the more commercially valuable things a marketing team can do. It is also one of the harder things to sell internally, because uncertainty is uncomfortable and precise-looking numbers are reassuring even when they should not be. More thinking on the commercial framing of these decisions is available in the Go-To-Market and Growth Strategy section of The Marketing Juice.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
