Media Mix Modelling: What It Can and Cannot Tell You
Media mix modelling is a statistical method that estimates the contribution of different marketing channels to a business outcome, typically revenue or sales. It uses historical data to build a regression model that attributes performance across paid media, owned channels, and external variables like seasonality and pricing. Done well, it gives you a defensible view of where your budget is working. Done poorly, it gives you a very expensive way to confirm your existing assumptions.
I want to be honest about what this technique is and is not, because the gap between how it gets sold and how it actually performs in practice is significant.
Key Takeaways
- Media mix modelling estimates channel contribution using historical data, but the quality of the output depends entirely on the quality of the data going in.
- MMM is a macro-level planning tool. It is not designed to optimise individual campaigns in real time, and treating it as one leads to poor decisions.
- The model reflects the past. If your market, competitive set, or media mix has shifted significantly, historical patterns may not hold.
- MMM and attribution modelling answer different questions. Using both together, with clear understanding of what each does, gives you a more complete picture than either alone.
- The biggest risk with media mix modelling is not a technical failure. It is a governance failure, where the model output gets treated as fact rather than as one informed perspective.
In This Article
- Why Media Mix Modelling Has Come Back Into Focus
- How Does Media Mix Modelling Actually Work?
- What Data Do You Actually Need?
- MMM Versus Attribution Modelling: Different Questions, Not Competing Answers
- The Diminishing Returns Curve Is the Most Useful Output
- Where Media Mix Modelling Gets Oversold
- How to Commission or Evaluate an MMM Project
- The Governance Question Nobody Asks
Why Media Mix Modelling Has Come Back Into Focus
There was a period, roughly the mid-2010s, when media mix modelling felt like something large consumer goods companies did and everyone else ignored. Digital attribution had arrived, last-click was everywhere, and the promise of granular channel-level data made the slow, expensive process of building an econometric model feel unnecessary.
Then the tracking environment changed. Third-party cookies started disappearing. iOS privacy updates reduced the signal available to pixel-based attribution. Walled gardens like Meta and Google became less willing to share the data that multi-touch attribution models depend on. Suddenly, the limitations of digital attribution became impossible to ignore, and MMM started looking a lot more attractive again.
I spent years managing performance budgets across dozens of clients, and the attribution conversation was always uncomfortable. The numbers looked clean. The dashboards looked confident. But when you dug into the methodology, you found that most of what we called attribution was actually just credit allocation, often biased toward the channels that were easiest to measure. MMM does not solve that problem entirely, but it approaches it differently, and that difference matters.
If you want broader context on how measurement fits into a marketing analytics programme, the Marketing Analytics hub at The Marketing Juice covers the full landscape, from data foundations through to reporting and channel-level analysis.
How Does Media Mix Modelling Actually Work?
At its core, MMM is a multiple regression. You take a dependent variable, usually weekly or monthly revenue or sales volume, and you regress it against a set of independent variables that include your marketing spend by channel, plus control variables for things like price changes, distribution, competitive activity, and macroeconomic conditions.
The model estimates a coefficient for each marketing variable, which represents the incremental contribution of that channel to the outcome. From those coefficients you can calculate return on investment by channel, identify diminishing returns curves, and model what would happen to revenue if you shifted budget between channels.
Two technical adjustments are worth understanding even if you are not building the model yourself. First, adstock transformation: marketing effects do not disappear the week you stop spending. A TV campaign creates awareness that decays over time. Adstock models that decay rate, so the model accounts for the lagged effect of past spend rather than treating each week in isolation. Second, saturation curves: most channels exhibit diminishing returns at higher spend levels. The model should account for the fact that your first pound of TV spend is more efficient than your ten-thousandth.
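The two transformations can be sketched in a few lines of Python. The decay rate and half-saturation values below are illustrative assumptions, not fitted or recommended parameters; real models estimate them from the data.

```python
# Sketch of the two transformations described above: geometric adstock
# (lagged carryover of past spend) and a Hill-type saturation curve
# (diminishing returns at higher spend). Parameter values are invented
# for illustration only.

def adstock(spend, decay=0.5):
    """Geometric adstock: each week carries over `decay` of the
    previous week's adstocked value, so past spend keeps working."""
    out = []
    carry = 0.0
    for x in spend:
        carry = x + decay * carry
        out.append(carry)
    return out

def hill_saturation(x, half_sat=100.0, slope=1.0):
    """Hill curve: response rises steeply at low spend, then flattens.
    `half_sat` is the spend level at which response reaches 50%."""
    return x ** slope / (x ** slope + half_sat ** slope)

weekly_spend = [120, 80, 0, 0, 50]
transformed = [hill_saturation(a) for a in adstock(weekly_spend)]
# Note the zero-spend weeks still produce a positive transformed value,
# because the adstocked carryover from earlier weeks persists.
```

In a full model, these transformed series, not the raw spend figures, are what gets regressed against the outcome variable.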
Bayesian MMM has become more common in recent years, particularly since Google released its open-source Meridian framework and Meta released Robyn. Bayesian approaches allow you to incorporate prior beliefs about channel performance, which is useful when you have limited historical data or when you want to constrain the model to avoid implausible outputs. They also produce probability distributions rather than point estimates, which gives you a more honest picture of uncertainty.
What Data Do You Actually Need?
This is where most MMM projects run into trouble before they have produced a single output. The data requirements are more demanding than most marketing teams expect, and the quality bar is higher than most data warehouses actually meet.
You need, at minimum, two years of weekly data, roughly 104 observations, to estimate reliable coefficients, particularly if your business has strong seasonality. You need spend data by channel at a consistent granularity. You need your outcome variable, which should be as close to actual business performance as possible, not a proxy metric. And you need your control variables: pricing data, distribution or availability data, and any significant external events that affected your category.
The control variables are where I see the most shortcuts taken. Teams build models with spend and revenue and call it done. But if your prices changed significantly over the period, or if a competitor entered or exited the market, or if you had a supply chain disruption that affected availability, and none of that is in the model, the model will misattribute those effects to your marketing channels. The output will look plausible. It will not be accurate.
When I was working with a retail client on a budget reallocation exercise, we discovered midway through the data preparation that their promotional pricing data had been stored inconsistently across three different systems, with different definitions of what counted as a promotion. That single data quality issue delayed the project by six weeks and changed the model outputs materially once it was resolved. The version built on the messy data would have recommended cutting TV spend significantly. The corrected version showed TV was actually one of the more efficient channels once promotional effects were properly isolated.
MMM Versus Attribution Modelling: Different Questions, Not Competing Answers
One of the persistent confusions I encounter is the framing of MMM versus attribution as an either/or choice. They are not alternatives. They answer fundamentally different questions at different levels of granularity.
Attribution modelling, whether rules-based or data-driven, works at the level of individual user journeys. It tries to understand which touchpoints in a specific customer’s path to conversion deserve credit. It is useful for tactical optimisation: which ad creative is performing, which audience segment converts better, how different placements compare within a channel. Understanding the full scope of the attribution challenge is worth its own article, and the Semrush guide to KPI metrics covers some of the measurement foundations that underpin both approaches.
MMM works at the aggregate level. It does not care about individual journeys. It looks at population-level inputs and outputs over time and estimates the macro contribution of each channel. It is useful for strategic planning: how should I allocate next year’s budget across TV, paid search, social, and out-of-home?
The problem is that the two approaches often produce different numbers for the same channel, and that creates organisational tension. Paid search almost always looks more efficient in attribution models than in MMM, because attribution models capture the conversion at the point of search, which is the end of a buying process that other channels helped create. MMM is more likely to show the upper-funnel channels contributing meaningfully to that demand, even though they are invisible in the attribution data.
I have sat in budget meetings where the paid search team and the brand team were essentially arguing from different measurement systems without realising it. The paid search team had attribution data showing exceptional return on ad spend. The brand team had MMM data showing that brand awareness was a significant driver of search volume. Both were correct. Neither was the complete picture.
The Diminishing Returns Curve Is the Most Useful Output
Most discussions of MMM focus on the channel-level ROI comparison, which channel is most efficient, which should get more budget. That is useful, but it is not the most actionable output the model produces.
The diminishing returns curve for each channel tells you something more specific: at your current spend level, are you on the efficient part of the curve or the saturated part? Two channels can have the same average ROI but very different marginal ROI at the current spend level. The channel where you are spending heavily and sitting in the flat part of the saturation curve is the one to cut. The channel where you are spending lightly and sitting on the steep part of the curve is the one to increase.
This is where budget optimisation becomes a genuine analytical exercise rather than a gut-feel conversation. The model gives you a defensible basis for saying: if we move X from channel A to channel B, we expect Y incremental revenue. That is not a guarantee. It is a calibrated estimate based on historical patterns. But it is substantially better than the alternative, which is allocating budget based on whoever argued most convincingly in the last planning meeting.
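The "move X from channel A to channel B, expect Y" logic can be sketched directly from fitted saturation curves. Everything below is illustrative: the Hill curve with slope 1 is one common functional form, and the channel parameters are invented, not fitted values from any real model.

```python
# Hedged sketch of the reallocation estimate described above: given a
# saturation curve per channel, estimate the incremental revenue of
# moving budget between channels. Curve parameters are invented.

def response(spend, coef, half_sat):
    """Revenue contribution at a given spend level (Hill curve, slope 1).
    `coef` is the maximum attainable contribution for the channel."""
    return coef * spend / (spend + half_sat)

def shift_budget(channels, from_ch, to_ch, amount):
    """Expected incremental revenue of moving `amount` of budget,
    holding all other channels constant."""
    delta = 0.0
    for name, change in ((from_ch, -amount), (to_ch, amount)):
        coef, half_sat, spend = channels[name]
        delta += response(spend + change, coef, half_sat) \
               - response(spend, coef, half_sat)
    return delta

# channel -> (max contribution, half-saturation point, current spend)
channels = {
    "tv":     (500.0, 200.0, 400.0),  # heavy spend: flat part of curve
    "social": (300.0, 150.0, 40.0),   # light spend: steep part of curve
}
# Moving budget from the saturated channel to the steep one is net
# positive here, because social's marginal response at low spend
# exceeds TV's marginal response at high spend.
gain = shift_budget(channels, "tv", "social", 20.0)
```

Note that the same average ROI for both channels would not change this calculation; it is the marginal slope at the current spend level that decides the direction of the move.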
For teams building out their KPI reporting infrastructure alongside MMM, the Semrush guide to KPI reporting is a practical reference for structuring how model outputs feed into regular business reviews.
Where Media Mix Modelling Gets Oversold
I have seen MMM sold as a solution to the attribution problem. It is not. It is a different tool with different limitations. Being clear about those limitations is not pessimism. It is the only way to use the tool correctly.
First, MMM reflects the past. The model is built on historical data, and it assumes that the relationships between your marketing inputs and business outputs are relatively stable. If your competitive environment has shifted, if you have entered a new market, if consumer behaviour has changed structurally, the historical coefficients may not be a reliable guide to future performance. This is not a fatal flaw, but it means you need to update models regularly and treat outputs as directional rather than definitive.
Second, MMM cannot easily measure channels with limited variation in spend. If you have run paid search at roughly the same budget for three years, the model has very little data to work with when estimating the marginal effect of changing that budget. You need variation in your spend data for the model to learn from it. This is one reason why incrementality testing, running controlled experiments where you vary spend in specific geographies or time periods, is a useful complement to MMM rather than a replacement for it.
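A quick diagnostic for this limitation is the coefficient of variation of each channel's spend series: channels that have barely moved give the model almost nothing to learn from. The 0.15 threshold below is an illustrative assumption, not an established standard.

```python
# Sketch of the "limited spend variation" diagnostic: flag channels
# whose spend barely varies, since the model cannot reliably estimate
# a marginal effect for them. The threshold is an assumed cut-off.
from statistics import mean, pstdev

def low_variation_channels(spend_by_channel, cv_threshold=0.15):
    """Return channels whose coefficient of variation (std dev / mean)
    falls below the threshold."""
    flagged = []
    for channel, spend in spend_by_channel.items():
        m = mean(spend)
        if m > 0 and pstdev(spend) / m < cv_threshold:
            flagged.append(channel)
    return flagged
```

Channels flagged this way are the natural candidates for incrementality tests, since a deliberate geo or time-period holdout creates exactly the spend variation the model is missing.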
Third, the model is only as good as the variables you include. If there is a significant driver of your business performance that is not in the model, the model will misattribute its effects. This is the garbage-in-garbage-out problem, and it is more common than the vendors of MMM platforms typically acknowledge.
Fourth, and this is the one I feel most strongly about: MMM outputs require interpretation. The model produces numbers. A human has to decide what those numbers mean in the context of the business, the market, and the strategy. I have seen organisations take model outputs and implement the budget recommendations mechanically, without asking whether the historical period was representative, without checking whether the control variables were complete, without considering whether the business context had changed. That is not using a model. That is outsourcing judgement to a spreadsheet.
How to Commission or Evaluate an MMM Project
If you are considering commissioning an MMM project, either from an external provider or building it internally, there are a few questions worth asking before you start.
What is the actual business decision this model needs to inform? If the answer is vague, the model will be vague. MMM is most useful when it is scoped to answer a specific strategic question, typically around budget allocation or channel investment levels. If you are building it because it seems like something you should have, you will get an output that sits in a slide deck and influences nothing.
What data is available, and what is its quality? Before engaging any external provider, do an honest audit of your data. How far back does it go? Is spend data available at weekly granularity by channel? Do you have pricing and promotional data? Have there been significant structural changes to the business in the historical period? The answers to these questions will determine whether MMM is feasible and how much data preparation work is required.
How will the outputs be validated? A good MMM provider should be able to show you how well the model fits historical data, how sensitive the outputs are to different modelling assumptions, and how the results compare to any incrementality tests or other measurement sources you have. If a provider is not willing to show you the uncertainty ranges around their estimates, that is a warning sign.
Who owns the model going forward? MMM is not a one-time project. Markets change, media mixes change, and a model built on data from two years ago may give you misleading outputs today. The organisations that get the most value from MMM treat it as an ongoing capability, refreshing the model periodically and using it as one input into a continuous planning process.
For teams connecting MMM outputs to their broader analytics and reporting stack, tools like Tableau via Sprout Social can help bridge the gap between model outputs and the dashboards that planning teams actually use day to day. And if you are thinking about how content and organic channels fit into the measurement picture, Moz’s piece on using GA4 data to inform content strategy is a useful reference for bringing those channels into a more unified measurement view.
The Governance Question Nobody Asks
When I judged the Effie Awards, one of the things that struck me about the entries that demonstrated genuine marketing effectiveness was how clearly they connected measurement to decision-making. Not just “we had a model” but “the model told us X, we changed Y, and we can show the outcome.” That chain of reasoning, from measurement to decision to result, is surprisingly rare in practice.
The governance question for MMM is: who has the authority to act on the outputs, and what process exists for challenging the model when the outputs conflict with other evidence? Without clear answers to those questions, even a technically excellent model will sit unused or, worse, will be selectively cited when it supports a decision that has already been made for other reasons.
I have watched organisations commission expensive MMM projects and then spend the next six months arguing about whether the outputs were valid, because there was no agreed framework for how the model would be used before it was built. The model became a political document rather than an analytical tool. The budget allocation that resulted had less to do with the model outputs than with who had the most influence in the room.
Good measurement governance means agreeing in advance what questions the model is designed to answer, what level of confidence is required before acting on the outputs, and how the model will be updated as new data becomes available. It means treating the model as one input alongside business judgement, market knowledge, and qualitative insight, rather than as an oracle that replaces thinking.
Measurement, at every level of sophistication, only creates value when it changes decisions. If you want to think through how analytics fits into the broader commercial picture, the full range of topics in the Marketing Analytics section of The Marketing Juice covers everything from foundations to channel-level measurement in practical terms.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
