Media Mix Modeling: What It Tells You and What It Doesn’t

Media mix modeling is a statistical technique that estimates the contribution of each marketing channel to a business outcome, typically revenue or sales, using historical data and regression analysis. It gives marketers a way to see how budget allocation across channels has performed over time, and what might happen if that allocation changes. Done well, it is one of the more honest tools in the measurement toolkit. Done badly, it produces confident-looking numbers that are wrong in ways that are very hard to detect.

Key Takeaways

  • Media mix modeling estimates channel contribution using historical data, not real-time signals. It tells you what happened, not what is happening.
  • The quality of an MMM output depends almost entirely on the quality and completeness of the data going in. Garbage in, confident garbage out.
  • MMM works best when combined with other measurement approaches. Treated as the only source of truth, it will mislead you.
  • Most MMM projects fail not because of the statistics, but because of how the outputs are interpreted and acted on by people who did not build the model.
  • The practical value of MMM is not precision. It is directional clarity about where your budget is and is not working at scale.

I have sat in a lot of rooms where someone has presented an MMM output as settled truth. The charts look authoritative. The ROI numbers are specific to two decimal places. And the recommendation is always some version of “shift budget here, cut budget there.” What those rooms rarely contain is an honest conversation about what the model cannot see, what assumptions were baked in, and what would have to be true for the recommendation to be correct. That conversation is the one worth having.

What Media Mix Modeling Actually Does

At its core, MMM is a regression model. You feed it historical data on your marketing spend by channel, alongside data on external factors like seasonality, pricing, economic conditions, and competitor activity, and it attempts to isolate how much each variable contributed to your outcome metric. The output is a set of coefficients that represent the estimated effect of each channel on sales or revenue.
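To make that concrete, here is a minimal sketch of the core idea in Python using pandas and statsmodels. The file name, column names, and channel list are hypothetical placeholders, and a real MMM would apply adstock and saturation transformations (covered later) rather than regressing on raw spend.

```python
# Minimal sketch: regress revenue on channel spend plus external controls.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weekly_marketing_data.csv")

# Explanatory variables: spend per channel plus external factors.
X = df[["tv_spend", "search_spend", "social_spend",   # channel spend
        "avg_price", "seasonality_index"]]            # external controls
X = sm.add_constant(X)                                 # baseline (intercept) term
y = df["revenue"]                                      # outcome metric

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients are the estimated effect of each variable
```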

The technique has been around since the 1960s and was used extensively by large consumer goods companies long before digital advertising existed. Back then, the channels were television, print, and radio, and the data was relatively clean. You knew what you spent, you knew what aired, and you had decent sales data. The modeling was still imperfect, but the inputs were manageable.

Modern MMM is considerably more complicated. You might be modeling fifteen or twenty channels simultaneously, some of which are highly correlated with each other. You are trying to account for the interaction between channels. You are working with data that has different granularities, different lag effects, and different levels of reliability. The statistical machinery has improved, but the fundamental challenge of attribution, figuring out what caused what in a complex system, has not gone away.

If you want a broader grounding in how analytics tools handle attribution and measurement more generally, the Marketing Analytics hub covers the landscape from GA4 through to measurement strategy. MMM sits within that wider picture, and it makes more sense when you understand where it fits relative to other approaches.

Why MMM Has Come Back Into Fashion

For a while, media mix modeling felt like a legacy approach. Digital advertising brought with it click-based attribution, last-touch models, and eventually multi-touch attribution. Marketers could see, or thought they could see, exactly which ads led to which conversions. MMM felt slow and backward-looking by comparison.

Then a few things happened. Privacy regulation tightened. Third-party cookies started disappearing. iOS changes reduced the signal quality in mobile advertising significantly. And marketers who had been relying on click-based attribution began to notice that the numbers they were seeing in their ad platforms did not add up when you looked at actual business performance. The sum of attributed conversions across platforms was often two or three times the actual number of sales.

I saw this problem clearly when I was running agency operations and managing large ad budgets across multiple channels. Each platform would claim credit for a conversion, and if you added up all the claimed credit, you were looking at a number that bore no resemblance to the client’s actual revenue. It was not fraud exactly. It was the predictable result of every platform measuring its own contribution using its own methodology, with no coordination and no shared ground truth. The client was paying for a story, not a measurement.

MMM sidesteps this problem because it does not rely on individual-level tracking. It looks at aggregate spend and aggregate outcomes over time and uses statistical methods to estimate relationships. That makes it much more privacy-resilient than cookie-based attribution, and it means the numbers are not being generated by the same platforms that have a financial interest in looking good.

Meta, Google, and others have all released their own MMM toolkits in recent years. Robyn from Meta and Meridian from Google are both open-source frameworks that lower the technical barrier to running these models. Whether that is a good thing depends on whether the people running the models understand what they are doing. A tool that makes it easier to produce a number does not automatically make it easier to produce a correct number.

What the Model Cannot See

This is where most MMM conversations go wrong. The model produces outputs that look precise. People treat those outputs as accurate. The distinction matters enormously.

The first problem is data completeness. MMM can only account for variables it has been given. If you have not included competitor spend data, the model cannot account for competitive effects. If you have not included distribution changes, pricing shifts, or product launches, those effects will be absorbed into the channel coefficients in ways that distort the results. A television campaign that ran during a period when you also launched a new product in 3,000 additional stores will look more effective than it was, because the model has no way to separate the two effects.

The second problem is multicollinearity. If your channels move together, which they often do because budgets are planned in annual cycles and tend to go up and down in parallel, the model struggles to separate their individual contributions. The coefficients become unstable. Small changes in the data can produce large changes in the output. This is a genuine statistical problem, not a failure of the software.
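One way to see whether this is biting is to compute variance inflation factors for the spend columns before trusting the coefficients. This sketch assumes the same hypothetical weekly data as the earlier example; VIF values well above 5 to 10 are a warning that the channels move together too closely for the model to separate them cleanly.

```python
# Check how strongly each channel's spend can be predicted from the others.
# A high VIF means unstable, hard-to-interpret coefficients for that channel.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("weekly_marketing_data.csv")  # hypothetical file, as above
spend = sm.add_constant(df[["tv_spend", "search_spend", "social_spend"]])

for i, col in enumerate(spend.columns):
    if col != "const":
        print(f"{col}: VIF = {variance_inflation_factor(spend.values, i):.1f}")
```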

The third problem is lag and saturation curves. MMM typically uses an adstock transformation to model the delayed and diminishing effects of advertising. The shape of that transformation, how quickly the effect decays and how quickly it saturates, has to be specified or estimated. If those parameters are wrong, the ROI estimates will be wrong. And because you cannot directly observe the true adstock effect, you cannot easily verify whether your parameters are reasonable.
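For illustration, this is roughly what a geometric adstock and a Hill-style saturation curve look like in code. The decay and half-saturation values here are illustrative assumptions, not recommendations; in a real model they would be estimated or tested across a range.

```python
# Sketch of two common transformations applied to weekly spend before regression.
# Parameter values below are illustrative only.
import numpy as np

def geometric_adstock(spend, decay=0.5):
    """Carry a fraction of each week's effect into subsequent weeks."""
    adstocked = np.zeros_like(spend, dtype=float)
    carryover = 0.0
    for t, x in enumerate(spend):
        carryover = x + decay * carryover
        adstocked[t] = carryover
    return adstocked

def hill_saturation(adstocked, half_saturation=100.0, shape=1.0):
    """Diminishing returns: response flattens as effective spend grows."""
    return adstocked**shape / (adstocked**shape + half_saturation**shape)

weekly_spend = np.array([0, 50, 120, 80, 0, 0, 200, 150], dtype=float)
transformed = hill_saturation(geometric_adstock(weekly_spend, decay=0.6))
print(transformed.round(3))
```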

The fourth problem is that MMM is backward-looking. It tells you what worked over the historical period you modeled. If the media landscape has changed, if audience behaviour has shifted, if a channel’s cost structure has moved, the historical relationships may not hold going forward. The model has no way to know that.

I once sat through a presentation where an MMM had been built on three years of historical data, with the most recent year being 2019. The recommendation was to shift significant budget into out-of-home advertising based on strong historical ROI. The year was 2021. The model had no awareness of what had happened in between. The analyst presenting it had not flagged this as a limitation. The client nearly acted on it.

How to Read an MMM Output Without Being Misled

The most important thing you can do when reviewing an MMM output is ask about the confidence intervals, not just the point estimates. An ROI of 3.2x sounds specific. An ROI of 3.2x with a 90% confidence interval of 0.8x to 6.1x tells a very different story. If the model cannot distinguish between “this channel broke even” and “this channel returned six times what we spent,” the output should not be driving budget decisions on its own.
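If the model was built with something like statsmodels, pulling the interval estimates alongside the point estimates takes one extra step. This sketch assumes the fitted `model` object from the earlier hypothetical regression example; `alpha=0.10` gives a 90% interval for each coefficient.

```python
# Report coefficient ranges, not just point estimates.
intervals = model.conf_int(alpha=0.10)          # 90% confidence interval per coefficient
intervals.columns = ["lower_90", "upper_90"]
print(intervals.join(model.params.rename("point_estimate")))
```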

Ask what variables were included and what was left out. Any competent analyst should be able to give you a clear list of the variables in the model and an honest account of what could not be included and why. If the answer is defensive or vague, treat the output with caution.

Ask how the model was validated. The standard approach is to hold out a portion of the historical data, build the model on the remainder, and then test whether the model’s predictions match the held-out actuals. If validation was not done, or if the model performed poorly on the validation data, the outputs are not reliable.
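A minimal version of that check, under the same hypothetical data assumptions as the earlier sketches, looks like this: fit on the earlier weeks, predict the held-out weeks, and measure the error.

```python
# Time-based holdout validation: train on the first ~80% of weeks, test on the rest.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weekly_marketing_data.csv")  # hypothetical file
features = ["tv_spend", "search_spend", "social_spend", "avg_price", "seasonality_index"]

cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]

model = sm.OLS(train["revenue"], sm.add_constant(train[features])).fit()
pred = model.predict(sm.add_constant(test[features]))

# Mean absolute percentage error on the held-out weeks; large errors mean the
# model's outputs should not be trusted for budget decisions.
mape = np.mean(np.abs((test["revenue"] - pred) / test["revenue"])) * 100
print(f"Holdout MAPE: {mape:.1f}%")
```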

Ask what assumptions were made about adstock and saturation. These are not minor technical details. They are central to the ROI estimates. If the analyst cannot explain the assumptions clearly, they may not understand them well enough to have made them correctly.

And ask what the model would have to get wrong for the recommendation to be incorrect. This is the question that separates analysts who understand their work from those who are presenting outputs they do not fully understand. A good analyst will have a ready answer. A poor one will be surprised you asked.

MMM in Practice: What It Is Good For

Despite its limitations, MMM is genuinely useful. The problem is not the technique. It is the expectation that the technique will provide certainty rather than direction.

Where MMM performs well is in answering broad strategic questions about budget allocation at scale. If you are spending tens of millions across ten or fifteen channels and you want to know whether your overall mix is roughly sensible, MMM can give you a useful directional read. It can tell you if a channel appears to be saturated. It can tell you if a channel that receives very little budget might be underinvested. It can help you understand the relative contribution of brand-building versus performance channels over time.

It is less useful for granular tactical decisions. It will not tell you whether to bid higher on branded search terms. It will not tell you which creative is working. It will not give you reliable channel-level ROI numbers precise enough to justify cutting a channel entirely based on the model alone.

The marketers I have seen use MMM well treat it as one input among several. They run it alongside incrementality testing, where they run controlled experiments to measure the true causal effect of specific channels or campaigns. They use it alongside platform data, while remaining appropriately sceptical of platform-reported attribution. And they use it alongside their own commercial judgment, informed by understanding the business, the market, and the competitive context.

When I was scaling a performance marketing operation from a small team to something approaching a hundred people, one of the things I learned is that measurement confidence is not the same as measurement accuracy. You can be very confident in a number that is wrong. The discipline is in knowing which numbers to trust, which to treat as directional, and which to discard. MMM outputs almost always belong in the middle category.

Understanding how users move through your digital properties is part of the same challenge. Tools like GA4 give you a different lens on behaviour, and Moz’s breakdown of GA4 features worth paying attention to is a useful read if you are trying to connect platform analytics to the broader measurement picture. MMM and GA4 are answering different questions, but they are both trying to tell you something about what is working.

The Incrementality Question MMM Cannot Fully Answer

One of the most important questions in marketing measurement is not “what did this channel contribute” but “what would have happened without it.” These are related but not identical questions. MMM attempts to answer the first. The second requires a counterfactual, and counterfactuals are hard.

Consider paid search. MMM will typically show paid search with a high ROI, because paid search spend correlates strongly with revenue. But a significant portion of paid search, particularly branded search, captures demand that already existed. The person who searched for your brand name and clicked your paid ad was probably going to find you anyway. The incremental contribution of that click may be much lower than the attributed revenue suggests.

MMM can partially account for this if the model is built carefully and if you have enough variation in your branded search spend over time to identify the relationship. But in practice, many MMM implementations overstate the ROI of channels that capture existing demand and understate the ROI of channels that create new demand, particularly upper-funnel brand activity where the effect is diffuse and delayed.

This is one reason why thinking carefully about which metrics you are actually trying to move matters before you build or commission an MMM. If your model is optimising for short-term revenue attribution and your business actually needs long-term brand equity, the model will push you in the wrong direction, confidently.

Incrementality testing, meaning geo-based holdout experiments or time-based blackout tests, is the more direct way to answer the counterfactual question. It is also more expensive and operationally complex. The best measurement programmes use both approaches and triangulate between them rather than relying on either alone.
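As a rough illustration of the geo-based version, the simplest read is a difference-in-differences: compare the change in sales in regions where advertising continued against the change in regions where it was paused. Everything here, the file, column names, and dates, is a hypothetical placeholder.

```python
# Simplest geo-holdout read: difference-in-differences on weekly sales by region.
import pandas as pd

geo = pd.read_csv("weekly_sales_by_region.csv", parse_dates=["week"])  # hypothetical file
blackout_start = pd.Timestamp("2024-03-04")                            # hypothetical test start

def pre_post_change(frame):
    """Average weekly sales during the test period minus the pre-period baseline."""
    post = frame["week"] >= blackout_start
    return frame.loc[post, "sales"].mean() - frame.loc[~post, "sales"].mean()

treated = geo[~geo["is_holdout"]]   # regions where advertising continued (boolean column)
holdout = geo[geo["is_holdout"]]    # regions where advertising was paused

incremental = pre_post_change(treated) - pre_post_change(holdout)
print(f"Estimated incremental weekly sales per region: {incremental:,.0f}")
```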

The Organisational Problem With MMM

There is a version of this problem that has nothing to do with statistics. It is about what happens when an MMM output lands in an organisation that is not equipped to use it well.

MMM projects are typically expensive and time-consuming. By the time the output arrives, there is significant organisational pressure to act on it. The budget has been spent. The analysts have presented. The leadership team is expecting a recommendation. In that environment, the nuance tends to get lost. The confidence intervals get dropped. The caveats get footnoted. And the recommendation, “shift 15% of budget from television to paid social,” gets treated as a finding rather than an estimate.

I have judged marketing effectiveness work at the Effie Awards and spent time reviewing how brands account for their results. One pattern that appears repeatedly is the gap between the rigour of the analysis and the confidence of the conclusions drawn from it. Measurement tools produce outputs. Humans decide what those outputs mean. That second step is where most of the error enters.

The organisations that use MMM well have built a culture where “this is directionally useful but uncertain” is an acceptable answer. They do not require false precision to make decisions. They have senior leaders who understand that marketing measurement is probabilistic, not deterministic, and who make decisions accordingly. That is rarer than it should be.

If you want to understand how this connects to broader measurement planning and the frameworks that support good decision-making across analytics, the Marketing Analytics section of The Marketing Juice covers the thinking behind building measurement systems that are honest about their own limitations. MMM is a powerful tool, but it works best inside a measurement culture that knows what questions to ask of it.

Building or Commissioning an MMM: What to Get Right

If you are considering an MMM project, either building one internally or commissioning an agency or vendor to run one, there are a few things worth getting right from the start.

Data quality is the most important variable. Before you spend anything on modeling, audit your historical spend data. Is it complete? Is it consistent? Are there gaps? Are the channel definitions stable over time, or have they changed in ways that make historical comparisons unreliable? Spend data that looks complete often has significant holes when you look closely. Channels get renamed. Reporting methodologies change. Agencies report things differently than in-house teams. A model built on inconsistent data will produce inconsistent outputs, and you will not necessarily be able to tell from the output itself that the data was the problem.
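A basic audit does not need to be sophisticated to catch the worst problems. This sketch, again using hypothetical file and column names, checks for missing weeks, missing values, and channels that report zero spend for long stretches, which is often a sign of a rename or a reporting gap rather than genuine inactivity.

```python
# Basic spend-data audit before any modeling work begins.
import pandas as pd

df = pd.read_csv("weekly_marketing_data.csv", parse_dates=["week"])  # hypothetical file

# Missing weeks in the date series
expected = pd.date_range(df["week"].min(), df["week"].max(), freq="W-MON")
missing_weeks = expected.difference(df["week"])
print("Missing weeks:", len(missing_weeks))

# Missing values per column
print(df.isna().sum())

# Channels with large shares of zero-spend weeks (possible renames or reporting gaps)
spend_cols = [c for c in df.columns if c.endswith("_spend")]
for col in spend_cols:
    zero_share = (df[col] == 0).mean()
    print(f"{col}: {zero_share:.0%} of weeks report zero spend")
```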

Define the outcome variable carefully. Revenue is the obvious choice, but it may not be the right one for your business. If you have long sales cycles, revenue may lag marketing activity by months, making the relationship hard to model. If you sell through retail partners, you may not have clean revenue data at all. Proxy metrics like leads, website conversions, or retail sell-through data can work, but they introduce their own assumptions about the relationship between the proxy and the actual business outcome.

Think about what external variables need to be included. Seasonality is the obvious one, but also consider pricing history, distribution changes, product launches, and any significant external events that affected your category. The more complete your variable set, the more confident you can be that the channel coefficients are capturing actual channel effects rather than absorbing other influences.

Plan for ongoing use rather than a one-time project. An MMM built on data from two years ago is of limited use today. The value of MMM comes from updating it regularly, ideally quarterly, so that the model reflects the current media environment and the current state of your business. A one-time project produces a one-time answer. A maintained model produces a running view of how your mix is performing.

And be honest about what you are going to do with the output before you build it. If the organisation is not prepared to act on a finding that challenges current budget allocations, the project may not be worth running. I have seen MMM outputs that clearly showed a channel underperforming get quietly shelved because the channel in question was managed by a senior leader who did not want to hear it. That is an organisational problem, not a measurement problem. But it is worth knowing in advance whether the measurement will actually be used.

For those thinking about how to track performance across channels at a more granular level, building custom reports in GA4 can give you channel-level visibility that complements what an MMM tells you at the aggregate level. The two approaches answer different questions, and having both in your toolkit makes the overall picture more complete.

Understanding your conversion data is also foundational. The evolution of conversion tracking in paid search is a useful reminder of how measurement methodology has changed over time, and why historical comparisons across periods with different tracking approaches need to be handled carefully in any MMM project.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is media mix modeling used for?
Media mix modeling is used to estimate the contribution of each marketing channel to a business outcome, typically revenue or sales, using historical spend data and statistical regression. It helps marketers understand which channels are driving results at the aggregate level and informs decisions about how to allocate budget across the mix. It is most useful as a directional tool for large-scale budget planning rather than a precise guide to tactical decisions.
How is media mix modeling different from multi-touch attribution?
Multi-touch attribution assigns credit for conversions to individual touchpoints in a customer’s experience, using individual-level tracking data. Media mix modeling works at the aggregate level, using historical spend and outcome data without requiring individual user tracking. MMM is more privacy-resilient and better suited to measuring channels like television or out-of-home that do not generate individual-level click data. Multi-touch attribution gives more granular, near-real-time insight but is heavily dependent on tracking quality and is vulnerable to the limitations of cookie-based measurement.
How much data do you need to run a media mix model?
Most practitioners recommend at least two to three years of weekly data as a starting point, though more data generally produces more stable results. The critical factors are not just volume but consistency and completeness. You need spend data that is broken out by channel in a consistent way over the full period, alongside reliable outcome data and ideally a set of external variables covering seasonality, pricing, and any significant business events. Gaps or inconsistencies in historical data are one of the most common reasons MMM outputs are unreliable.
Can small businesses use media mix modeling?
In most cases, no. Media mix modeling requires sufficient historical data, meaningful variation in spend across channels over time, and enough statistical signal to separate channel effects. Businesses with small budgets, few channels, or limited historical data will not have enough variation for the model to produce reliable results. For smaller businesses, simpler approaches such as incrementality testing, channel-level holdout experiments, or careful use of platform analytics are likely to be more practical and more reliable.
How often should a media mix model be updated?
Quarterly updates are a reasonable target for most businesses running active MMM programmes. The media landscape, audience behaviour, and competitive environment all change over time, and a model built on data that is a year or more old may no longer reflect current channel performance. Some organisations run continuous or rolling models that update as new data becomes available. The minimum is to refresh the model when there has been a significant change in the business, the media mix, or the external environment that would invalidate the historical relationships the model was built on.
