MMM Model: What It Measures and Where It Falls Short

A Marketing Mix Model (MMM) is a statistical technique that uses historical sales and spend data to estimate the contribution of each marketing channel to business outcomes. It was developed decades before digital attribution existed, and it remains one of the few measurement approaches that can account for offline media, long-term brand effects, and factors that click-based tracking simply cannot see.

But MMM is not a silver bullet. It is a model, which means it is a simplified version of reality built on assumptions. Understanding what those assumptions are, and where they break down, is the difference between using MMM well and using it to confirm decisions you had already made.

Key Takeaways

  • MMM estimates channel contribution using regression on historical data, which means it reflects the past rather than predicting what will happen if you change your mix today.
  • The model is only as good as the data fed into it. Missing channels, short time windows, and correlated spend patterns all distort outputs.
  • MMM and multi-touch attribution measure different things. Using both together gives a more complete picture than either does alone.
  • Diminishing returns curves are the most commercially useful output of a well-built MMM, not the channel contribution percentages.
  • MMM cannot measure what you have never spent on. It cannot tell you the ROI of a channel you have not tested.

I have spent 20 years watching marketing teams treat measurement frameworks as certainties rather than approximations. When I was running iProspect and managing hundreds of millions in ad spend across 30-plus industries, the most dangerous conversations were not the ones where we had no data. They were the ones where we had a model that looked authoritative and no one was questioning its inputs. MMM sits right in that zone. It is credible enough to drive budget decisions and complex enough that most stakeholders do not interrogate it properly.

What an MMM Model Actually Does

At its core, a marketing mix model runs a regression analysis. You feed it time-series data: weekly or monthly sales figures alongside spend data for each channel, and then a set of control variables that explain sales fluctuations that have nothing to do with marketing. Those control variables typically include things like price, distribution, seasonality, competitor activity, and macroeconomic conditions.

The model then estimates the coefficient for each marketing variable. That coefficient tells you how much a unit increase in spend on a given channel is associated with an increase in sales, holding everything else constant. From those coefficients, you can derive contribution percentages and, more usefully, response curves that show how returns change as spend increases or decreases.
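To make that concrete, here is a minimal sketch of the core regression in Python. The CSV file and column names are hypothetical, and a real model would transform the spend variables first (more on that below), but the structure is exactly this: sales regressed on channel spend plus controls.

```python
# Minimal sketch of the core MMM regression. File and column names are
# hypothetical; a production model transforms spend for adstock and
# saturation before this step.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weekly_data.csv")  # one row per week

# Marketing spend plus control variables that explain non-marketing variation
X = df[["tv_spend", "search_spend", "social_spend",
        "avg_price", "seasonality_index", "distribution"]]
X = sm.add_constant(X)  # intercept captures baseline sales
y = df["sales"]

model = sm.OLS(y, X).fit()
print(model.summary())  # each coefficient estimates a channel's association with sales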

The reason MMM has had a resurgence in recent years is straightforward. Cookie deprecation, iOS privacy changes, and the fragmentation of the consumer journey have all eroded the reliability of click-based attribution. If you cannot track a user across sessions and devices, last-click models become increasingly fictional. MMM sidesteps that problem entirely because it does not rely on individual-level tracking. It works at the aggregate level, which makes it privacy-safe by design.

For anyone building out a measurement framework, it is worth reading the broader context on marketing analytics to understand how MMM fits alongside other tools rather than replacing them.

The Data Requirements Nobody Warns You About

MMM sounds straightforward until you try to build one. The data requirements are significant and the quality bar is higher than most marketing teams expect.

You need at least two years of weekly data to build a model with statistical confidence. Three years is better. The reason is that you need enough variation in your spend patterns for the model to isolate the effect of each channel. If you have been spending roughly the same amount on TV every week for 18 months, the model cannot reliably estimate TV’s coefficient because there is no contrast to work with.

This is where I have seen MMM projects fall apart in practice. A brand comes in with 12 months of data, a relatively flat spend pattern, and three channels that are highly correlated because they all follow the same seasonal budget cycle. The model produces output that looks precise, with confidence intervals and R-squared values, but the underlying estimates are unreliable. The team presents it to the board anyway because it took six months and a significant consulting fee to produce.

The specific data requirements include:

  • Sales or revenue data at a consistent granularity (weekly is standard)
  • Spend data for every marketing channel, including channels you might consider minor
  • Impressions or GRPs for brand channels where spend alone does not capture exposure
  • Price and promotion data (discount depth, promotional frequency)
  • Distribution data if relevant (weighted distribution, shelf presence)
  • External factors: seasonality indices, competitor spend estimates where available, macroeconomic indicators

Omit any of these and the model will tend to attribute the missing variable’s effect to whichever channels happen to correlate with it. If you run TV every Christmas and Christmas is also your peak sales period, the model may overstate TV’s contribution if you have not properly controlled for seasonality.

How MMM Handles Adstock and Saturation

Two concepts are central to how MMM models media effects: adstock and saturation.

Adstock is the idea that advertising has a carryover effect. A TV ad seen this week does not just influence purchases this week. It creates a memory trace that decays over time and continues to influence behaviour in subsequent weeks. MMM models this by applying a decay function to the spend data before running the regression. The rate of decay varies by channel. TV typically has a longer carryover than paid search, which tends to have an almost immediate effect that dissipates quickly.
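The most common formulation is geometric decay, where each week carries over a fixed fraction of the previous week’s adstocked value. A minimal sketch, with hypothetical spend figures and a decay rate chosen purely for illustration:

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Carry over a fixed fraction of last week's adstocked value:
    this week's exposure = this week's spend + decay * last week's exposure."""
    adstocked = np.zeros(len(spend))
    adstocked[0] = spend[0]
    for t in range(1, len(spend)):
        adstocked[t] = spend[t] + decay * adstocked[t - 1]
    return adstocked

weekly_tv = np.array([100.0, 100.0, 0.0, 0.0, 0.0])  # spend stops after week 2
print(geometric_adstock(weekly_tv, decay=0.7))  # effect lingers: [100, 170, 119, 83.3, 58.3]
```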

Saturation captures diminishing returns. The first pound spent on a channel generates more return than the hundredth pound spent on the same channel. MMM models this by transforming the spend variable using a function that flattens out at higher spend levels. The resulting response curve shows you where you are on the diminishing returns curve for each channel.
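A common choice for the saturation transform is the Hill function, which rises steeply at low spend and flattens as spend grows. A sketch, with the half-saturation point and shape parameter set purely for illustration:

```python
def hill_saturation(spend, half_saturation, shape=1.0):
    """Map spend to a 0-to-1 response that reaches 0.5 at `half_saturation`
    and flattens as spend grows."""
    return spend**shape / (spend**shape + half_saturation**shape)

for s in (10_000, 50_000, 100_000, 200_000):
    print(s, round(hill_saturation(s, half_saturation=50_000), 2))
    # 0.17, 0.5, 0.67, 0.8: each doubling of spend buys less incremental response
```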

These two transformations, adstock and saturation, are applied before the regression runs. And here is the problem: the parameters that control these transformations are often set by the analyst or the modelling software, not derived purely from the data. The choice of decay rate and saturation function shapes the output significantly. A model with a long TV adstock will attribute more sales to TV than the same model with a short adstock. If you are using an MMM vendor and you do not understand how they set these parameters, you do not fully understand what you are buying.
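You can see that sensitivity directly by reusing the geometric_adstock sketch from above: the same burst of spend produces a very different total “effect” depending on the decay rate the analyst picks.

```python
burst = np.array([100.0, 0, 0, 0, 0, 0])  # one week of spend, then nothing
print(geometric_adstock(burst, decay=0.3).sum())  # ~143 units of adstocked exposure
print(geometric_adstock(burst, decay=0.8).sum())  # ~369 from the identical spend
```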

Where MMM Diverges From Attribution Models

MMM and multi-touch attribution (MTA) are often positioned as competitors. They are better understood as complementary frameworks that measure different things.

MTA works at the individual user level. It tracks touchpoints across a customer’s journey and assigns credit to each one. It is good at measuring the relative performance of digital channels against each other and at capturing the interaction effects between touchpoints. It is not good at measuring offline channels, brand effects, or anything that happens outside the trackable digital environment. The limitations of what standard analytics tools can capture are worth understanding in detail, and I have written about what data Google Analytics Goals cannot track, which illustrates exactly where these gaps appear.

MMM works at the aggregate level. It cannot tell you anything about individual customer journeys. It cannot tell you which touchpoint in a sequence drove the conversion. But it can measure TV, out-of-home, sponsorship, and PR. It can measure long-term brand effects that play out over months. And it does all of this without requiring any user-level tracking.

The practical implication is that if you are running a business with significant offline media investment or a long consideration cycle, MMM gives you information that MTA simply cannot. If you are running a pure-play digital business with short purchase cycles, MTA may be more actionable on a week-to-week basis, but MMM still provides a useful cross-check on whether your attribution model is overstating the contribution of last-touch channels.

Understanding attribution theory in marketing gives you the conceptual grounding to know when to trust each framework and when to hold both with appropriate scepticism.

The Diminishing Returns Curve Is the Most Useful Output

Most MMM presentations lead with channel contribution percentages. “TV drove 34% of sales. Paid search drove 22%. Display drove 8%.” These numbers look authoritative and they get used in budget planning. They are also the output I trust least from a well-built MMM.

The reason is that contribution percentages are highly sensitive to the historical spend mix. They tell you what happened given the spend levels you actually ran. They do not tell you what would happen if you doubled TV spend or cut paid search by 40%. They are descriptive, not prescriptive.

The diminishing returns curves are where the real commercial value sits. These curves show you, for each channel, how incremental return changes as spend increases. From these curves, you can estimate the marginal ROI at your current spend level and identify where you are over-invested (past the point of diminishing returns) and where you might have room to increase spend profitably.
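In practice, the marginal figure falls straight out of the fitted curve. A sketch with a hypothetical response curve (the shape and numbers are illustrative, not from any real model), estimating the return on the next pound numerically:

```python
def marginal_return(curve, spend, step=1_000):
    """Estimate revenue per extra pound at a given spend level by
    evaluating the fitted response curve a small step apart."""
    return (curve(spend + step) - curve(spend)) / step

# Hypothetical fitted curve: revenue flattens as spend rises
curve = lambda s: 500_000 * s / (s + 200_000)

print(marginal_return(curve, 50_000))   # ~1.6: each extra pound returns about £1.60
print(marginal_return(curve, 400_000))  # ~0.28: well past the point of diminishing returns
```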

Early in my career at lastminute.com, I ran a paid search campaign for a music festival that generated six figures of revenue within roughly 24 hours from a relatively simple setup. At the time, we had no MMM. We had no sophisticated attribution. What we had was a clear read on marginal return: spend went in, revenue came out, and we could see the ratio in near real-time. That directness is what made paid search so compelling in its early days. MMM tries to replicate that clarity for channels where you cannot see the feedback loop so quickly. The diminishing returns curve is the closest thing to it.

Measuring incremental contribution is a recurring challenge across all channel types. The same rigour that applies to MMM applies to questions like how to measure affiliate marketing incrementality, where the risk of double-counting is just as real.

Bayesian MMM and Why It Has Changed the Field

Traditional MMM uses ordinary least squares regression. It estimates coefficients purely from the data. This works reasonably well when you have lots of data with good variation. It breaks down when you have sparse data, short time windows, or highly correlated channels.

Bayesian MMM takes a different approach. Instead of estimating coefficients purely from the data, it starts with prior beliefs about what the parameters should look like and then updates those beliefs based on what the data shows. The priors might come from industry benchmarks, previous model runs, or expert knowledge about how a particular channel behaves.

The practical advantages are significant. Bayesian models handle sparse data better because the priors provide a regularising effect that prevents the model from fitting noise. They produce full probability distributions for each parameter rather than point estimates, which means you get genuine uncertainty quantification. And they are more transparent about the assumptions baked into the model, because those assumptions have to be explicitly stated as priors.
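As an illustration of what that looks like in code, here is a minimal Bayesian regression in PyMC on synthetic data. The priors, variable names, and data are all hypothetical, and production frameworks do considerably more, but the priors-plus-likelihood structure is the core idea.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_weeks = 104
x_tv = rng.uniform(0, 1, n_weeks)      # stand-ins for adstocked, saturated spend
x_search = rng.uniform(0, 1, n_weeks)
y = 2.0 + 0.8 * x_tv + 0.5 * x_search + rng.normal(0, 0.3, n_weeks)  # synthetic sales

with pm.Model():
    # Priors state beliefs before seeing data: channel effects are
    # non-negative and modest rather than enormous
    beta_tv = pm.HalfNormal("beta_tv", sigma=1.0)
    beta_search = pm.HalfNormal("beta_search", sigma=1.0)
    baseline = pm.Normal("baseline", mu=0.0, sigma=5.0)
    noise = pm.HalfNormal("noise", sigma=1.0)

    mu = baseline + beta_tv * x_tv + beta_search * x_search
    pm.Normal("sales", mu=mu, sigma=noise, observed=y)

    # The posterior is a full distribution per coefficient, not a point estimate
    idata = pm.sample()
```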

Google’s Meridian is an open-source Bayesian MMM framework, and Meta’s open-source Robyn takes a related approach built on regularised regression rather than full Bayesian inference. The release of both has democratised access to sophisticated MMM methodology in a way that would have been unthinkable five years ago. A team with solid data and Python or R capability can now build a credible MMM without a six-figure consulting engagement. That is genuinely useful. It is also a reason to be more careful about the quality of the inputs, because the barrier to producing output that looks credible has dropped significantly.

What MMM Cannot Tell You

There are structural limitations to MMM that no amount of methodological sophistication can overcome, and it is worth being explicit about them.

MMM cannot measure channels you have not spent on. If you have never run connected TV advertising, your MMM cannot tell you what the ROI of connected TV would be. The model can only work with historical variation. This is a significant constraint for brands considering entering new channels or making step-change shifts in their media mix.

MMM cannot capture individual-level behaviour. It cannot tell you that certain customer segments respond differently to certain channels. It cannot tell you that your best customers are being driven by brand search while your acquisition customers are coming through social. That granularity requires different tools.

MMM struggles with short-term tactical decisions. The model is built on historical patterns, typically at weekly granularity. It cannot tell you whether to increase your paid search bid on a Tuesday afternoon. For that kind of tactical optimisation, you need real-time performance data and, increasingly, tools that can process it at speed. Questions like how to measure the effectiveness of AI avatars in marketing or how to measure generative engine optimisation campaigns represent the frontier of measurement precisely because they involve new channels that have no historical data for an MMM to work with.

MMM also tends to undervalue digital channels, particularly paid search. Because paid search captures demand that was already there, its measured contribution in an MMM can look modest relative to its actual commercial importance. A brand that cuts paid search based on MMM output alone may find that demand does not disappear; it just goes to competitors. The model does not distinguish between demand creation and demand capture, and that distinction matters enormously for budget decisions.

Calibrating MMM With Geo Experiments

The most rigorous way to validate an MMM is to run geo experiments: controlled tests where you vary media activity in specific geographic markets while holding it constant in others, then compare outcomes. The experiment gives you a ground truth estimate of a channel’s causal effect, which you can use to check whether the MMM’s coefficient for that channel is in the right ballpark.

This approach is used by sophisticated advertisers and it meaningfully improves model reliability. It requires geographic variation in your media activity, which not all businesses can achieve. But for national advertisers with regional flexibility, geo holdout tests are one of the most valuable investments in measurement quality you can make.
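The comparison logic is simple at its core. A simplified sketch, assuming one test group and one control group of markets (real designs, such as synthetic control methods, are more sophisticated than this):

```python
import numpy as np

def geo_lift(test_sales, control_sales, pre_weeks):
    """Difference-in-differences style estimate: media changed only in the
    test markets after week `pre_weeks`; control markets set the counterfactual."""
    # Align the two groups on their pre-period relationship
    scale = test_sales[:pre_weeks].mean() / control_sales[:pre_weeks].mean()
    counterfactual = control_sales[pre_weeks:] * scale
    return test_sales[pre_weeks:].sum() - counterfactual.sum()  # incremental sales
```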

The principle is the same as any good scientific practice: you do not just build a model, you test its predictions against reality. I have seen too many MMM projects where the model is built, the output is presented, and then nothing is done to validate whether the recommendations actually work when implemented. The model becomes the authority rather than a hypothesis to be tested.

This connects to a broader point about how measurement frameworks should sit within a commercial context. Tracking KPIs and understanding inbound marketing ROI require the same discipline: the numbers are a starting point for decisions, not a substitute for them. A useful resource on building that discipline is Semrush’s guide to KPI metrics, which covers the principles of choosing metrics that actually connect to business outcomes.

How to Use MMM Output in Budget Planning

Assuming you have a reasonably well-built model with validated outputs, here is how I would use it in practice.

Start with the diminishing returns curves. For each channel, identify your current position on the curve. If you are spending well past the point of diminishing returns on one channel, that is the first place to look for reallocation. If another channel shows a steep response curve at your current spend level, that is a candidate for increased investment.

Use the model to run budget scenarios. Most MMM tools allow you to simulate the expected revenue outcome of different spend allocations. These simulations are not predictions. They are extrapolations from historical patterns. Treat them as directional rather than precise. If the model suggests that reallocating 15% of your TV budget to digital would increase revenue by 8%, the actual figure might be 4% or 12%. The direction is probably right. The magnitude is an estimate.
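Mechanically, a scenario run just pushes candidate budgets through the fitted response curves. A sketch with hypothetical curves and budget figures, which also shows why the output is directional rather than precise:

```python
# Hypothetical fitted response curves; in practice these come from the model
def revenue(tv, search):
    return (900_000 * tv / (tv + 300_000)            # TV response curve
            + 400_000 * search / (search + 80_000))  # search response curve

current = revenue(tv=500_000, search=100_000)
shifted = revenue(tv=425_000, search=175_000)  # move 15% of TV into search
print(f"Modelled change: {100 * (shifted / current - 1):+.1f}%")  # treat as directional
```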

Cross-reference with your other data sources. If the MMM says paid social has a low ROI but your platform data shows strong conversion rates, investigate the discrepancy before acting on either number. The gap between what attribution tools report and what MMM estimates is often where the most interesting measurement questions live. Resources like Mailchimp’s overview of marketing metrics and HubSpot’s email marketing reporting guide are useful for understanding how channel-specific metrics feed into a broader measurement picture.

Test before you fully commit. If the model recommends a significant budget shift, run a test at smaller scale first. Geo experiments are ideal for this. A three-month test in a subset of markets costs less than a full budget reallocation that turns out to be wrong.

When I was building teams at iProspect and managing large client budgets, the discipline I valued most was not the sophistication of the tools. It was the habit of asking “what would have to be true for this number to be right?” That question applies to MMM output as much as it applies to anything else. The model is a perspective on reality. Your job is to decide how much weight to give it.

The Practical Checklist Before You Commission an MMM

Before you spend time and budget on an MMM project, work through these questions honestly.

Do you have at least two years of weekly data with meaningful variation in your spend patterns? If not, the model will struggle to produce reliable estimates. You may be better served by investing in geo experiments first to build the data foundation.

Do you have complete spend data across all channels, including offline? A model that includes paid search and social but excludes TV or sponsorship will attribute the effects of those excluded channels to whatever is correlated with them. Incomplete data produces biased output.

Do you have a plan for what you will do with the output? MMM projects that end with a presentation deck and no action are expensive ways to feel like you have done something rigorous. The value is in the budget decisions that follow, not in the model itself.

Do you understand the methodology well enough to challenge it? You do not need to be able to build the model yourself. But you need to understand what adstock decay rates were assumed, how saturation was modelled, and what control variables were included. If the vendor cannot explain these things clearly, that is a problem.

Have you budgeted for validation? A model without validation is a hypothesis without a test. Plan for at least one geo experiment to check whether the model’s predictions hold in the real world.

There is a broader point here about the relationship between measurement and decision-making. The goal is not perfect measurement. It is honest approximation that is good enough to make better decisions than you would make without it. MMM, used well, clears that bar. Used badly, it provides false precision that is worse than no model at all.

If you are building out a more comprehensive analytics capability, the full range of topics in marketing analytics at The Marketing Juice covers everything from channel measurement to attribution frameworks to the tools that underpin modern measurement practice.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How much data do you need to build a reliable MMM model?
Most practitioners recommend a minimum of two years of weekly data, with three years being preferable. More important than the volume of data is the variation within it. If your spend has been relatively flat across all channels for the entire period, the model will struggle to isolate the effect of each channel even with three years of data. Meaningful variation in spend levels, ideally not all channels moving in the same direction at the same time, is what gives the regression something to work with.
What is the difference between MMM and multi-touch attribution?
MMM works at the aggregate level using time-series data. It can measure offline channels, brand effects, and long-term carryover, but it cannot tell you anything about individual customer journeys. Multi-touch attribution works at the individual user level, tracking touchpoints across a customer’s journey and assigning credit to each one. It is more granular for digital channels but cannot measure offline media and relies on individual-level tracking that is increasingly restricted by privacy changes. The two approaches are complementary rather than competing.
Why does MMM often undervalue paid search?
Paid search largely captures demand that already exists rather than creating new demand. When a user searches for your brand or product and clicks a paid result, they were probably going to buy regardless of whether the ad was there. MMM measures the correlation between paid search spend and sales, but if demand was already present, the incremental contribution of the paid click is lower than it appears. This does not mean paid search has no value, but it does mean that cutting it based on a low MMM coefficient can send that demand to competitors rather than eliminate it.
What is Bayesian MMM and how does it differ from traditional MMM?
Traditional MMM uses ordinary least squares regression, estimating coefficients purely from the data. Bayesian MMM incorporates prior beliefs about parameter values, updating them based on what the data shows. This makes Bayesian models more robust when data is sparse or channels are highly correlated, and it produces full probability distributions rather than point estimates, which gives you genuine uncertainty quantification. Frameworks like Google’s Meridian (and, in a related spirit, Meta’s regularised-regression-based Robyn) have made modern MMM accessible without requiring a large consulting engagement.
How do you validate that an MMM model is producing reliable output?
The most rigorous validation approach is geo experiments: controlled tests where you vary media activity in specific geographic markets while holding it constant in others. The experiment gives you a causal estimate of a channel’s effect that you can compare against the model’s coefficient. Statistical fit metrics like R-squared tell you how well the model explains historical data but do not tell you whether the model’s causal estimates are correct. Out-of-sample forecasting, where you hold back a portion of your data and test whether the model predicts it accurately, is another useful validation method.
