Media Mix Modeling: What It Does to Your Media Plan

Media mix modeling improves planning efficiency by quantifying the revenue contribution of each channel in your media plan, so you can reallocate budget toward what is genuinely driving growth and away from what is not. Rather than relying on last-click attribution or platform-reported metrics, it builds a statistical model from historical spend, sales, and external variables to estimate the true impact of each channel on business outcomes.

The result is a cleaner, more defensible basis for budget decisions. You stop optimizing toward the channels that look good in dashboards and start optimizing toward the channels that actually move revenue.

Key Takeaways

  • Media mix modeling uses historical data to estimate each channel’s true revenue contribution, independent of platform-reported metrics.
  • It accounts for external factors like seasonality, price changes, and economic conditions that attribution tools routinely ignore.
  • The output is a set of response curves that show diminishing returns per channel, which makes budget reallocation a data-led decision rather than a gut call.
  • MMM works best when run alongside other measurement approaches. It is a strong signal, not a complete answer.
  • The biggest efficiency gains come not from the model itself but from the discipline of actually acting on what it tells you.

Why Media Planning Still Gets This Wrong

I spent years watching media plans get built the same way: take last year’s allocation, adjust slightly based on what the platforms told us was working, and defend it in a deck full of impressions and click-through rates. It felt rigorous. It was not.

The problem is structural. Most media planning relies on attribution data that is collected by the same platforms being measured. Google tells you Google is working. Meta tells you Meta is working. They are not lying exactly, but they are measuring their own contribution in a way that consistently overstates it. When I was running large-scale paid campaigns across multiple channels, the overlap between what each platform claimed as a conversion was significant. The sum of all attributed conversions routinely exceeded total actual sales. That is not measurement. That is accounting fiction.

Media mix modeling was developed to solve this. It sits outside the platforms entirely, using your own first-party business data as the source of truth. That independence is the whole point.

If you are building out your broader measurement approach, the Marketing Analytics and GA4 hub covers the full landscape, from data infrastructure to attribution frameworks to incrementality testing.

What Does Media Mix Modeling Actually Measure?

At its core, MMM is a regression analysis. You feed it a time series of your marketing spend by channel, your sales or revenue data, and a set of external variables that affect demand independently of your marketing. The model then estimates the coefficient for each input, which tells you how much each unit of spend in each channel contributed to the output variable.
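To make the regression idea concrete, here is a minimal sketch using ordinary least squares on simulated data. The channel names, coefficients, and the seasonality control are all illustrative assumptions, not outputs from a real model; a production MMM would use more channels, more controls, and the adstock and saturation transforms described below.

```python
import numpy as np

# Illustrative weekly data: spend in two channels plus a seasonality control.
rng = np.random.default_rng(0)
weeks = 104
search_spend = rng.uniform(10, 50, weeks)
tv_spend = rng.uniform(0, 80, weeks)
seasonality = np.sin(np.arange(weeks) * 2 * np.pi / 52)  # annual cycle

# Simulated revenue with known "true" contributions plus noise.
revenue = (200 + 3.0 * search_spend + 1.5 * tv_spend
           + 40 * seasonality + rng.normal(0, 10, weeks))

# Fit the regression: revenue ~ intercept + spend terms + control.
X = np.column_stack([np.ones(weeks), search_spend, tv_spend, seasonality])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

intercept, beta_search, beta_tv, beta_season = coefs
print(f"search: {beta_search:.2f} revenue per unit spend")
print(f"tv:     {beta_tv:.2f} revenue per unit spend")
```

The estimated coefficients recover the simulated contributions because the seasonality control is in the model. Drop it from `X` and the channel coefficients absorb part of the seasonal signal, which is exactly the misattribution problem described in the next paragraph.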

The external variables are where most people underestimate the sophistication required. Price changes, competitor activity, macroeconomic conditions, weather, seasonality, and distribution changes all affect sales. If you do not control for them, your model will misattribute their effects to your marketing channels. A spike in sales during a heatwave is not evidence that your display campaign worked. A well-built MMM separates those signals.

The model also accounts for two effects that standard attribution cannot handle at all. The first is adstock, which captures the carry-over effect of advertising: a TV campaign you ran three weeks ago is still influencing purchases today. The second is saturation, which captures diminishing returns: the first pound you spend on a channel generates more return than the hundredth. Both of these are baked into the model’s structure, which is what makes its output genuinely useful for planning.
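Both effects have simple mathematical forms. Geometric adstock and a Hill-style saturation function are common choices, though the decay and half-saturation parameters below are illustrative assumptions rather than fitted values:

```python
import numpy as np

def adstock(spend, decay=0.6):
    """Geometric adstock: each week carries over `decay` of the prior stock."""
    stocked = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        stocked[t] = carry
    return stocked

def saturate(x, half_sat=50.0):
    """Hill-style saturation: returns flatten as x grows past half_sat."""
    return x / (x + half_sat)

# A one-week TV burst keeps influencing later weeks via adstock.
spend = np.array([100.0, 0, 0, 0, 0])
print(adstock(spend))  # ~ [100, 60, 36, 21.6, 12.96]

# Doubling spend less than doubles the saturated response.
print(saturate(50.0), saturate(100.0))  # 0.5 vs ~0.667, not 1.0
```

In a full model each channel's spend is first adstocked, then saturated, and the transformed series feeds the regression. That chaining is what lets the same model capture both delayed effects and diminishing returns.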

How Response Curves Change the Budget Conversation

The most practically useful output from an MMM is not the headline contribution percentages. It is the response curves.

A response curve plots the relationship between spend and return for each channel at different investment levels. It shows you where a channel starts to saturate, meaning where additional spend produces progressively less incremental revenue. That information transforms the budget allocation conversation entirely.

Instead of arguing about which channels “feel” like they are working or defending historical allocations because they are familiar, you can have a specific conversation about the marginal return on the next ten thousand pounds of spend. If paid search is saturated and organic social still has headroom, the model tells you that directly. If TV is delivering strong base sales contribution but you have been underinvesting in it because it is hard to attribute, the model surfaces that too.
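The "next ten thousand pounds" conversation can be read straight off the curves. The sketch below compares the marginal return of the next £10k in two channels; the curve parameters and spend levels are invented for illustration, not drawn from any real model:

```python
# Hypothetical response curves: revenue as a concave function of spend.
def response(spend, scale, half_sat):
    return scale * spend / (spend + half_sat)

def marginal_return(spend, scale, half_sat, step=10_000):
    """Revenue from the NEXT `step` of spend at the current spend level."""
    return response(spend + step, scale, half_sat) - response(spend, scale, half_sat)

# Paid search: deep into saturation at its current spend level.
mr_search = marginal_return(spend=500_000, scale=2_000_000, half_sat=100_000)
# Organic social: still on the steep part of its curve.
mr_social = marginal_return(spend=50_000, scale=1_000_000, half_sat=150_000)

print(f"next £10k in search returns ~£{mr_search:,.0f}")
print(f"next £10k in social returns ~£{mr_social:,.0f}")
```

Even though search has the larger total contribution in this toy setup, the marginal pound is worth several times more in social. That is the distinction between average and marginal return that dashboards obscure.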

When I was leading the growth of a performance marketing agency from a team of around twenty people to over a hundred, one of the persistent challenges was convincing clients to shift budget away from channels that looked efficient in their dashboards toward channels that were actually driving more revenue. The dashboard metrics were not wrong exactly; they just measured the wrong thing. Response curves from an MMM give you the language and the evidence to have that conversation without it becoming a debate about opinions.

What Efficiency Gains Actually Look Like in Practice

The efficiency gains from MMM fall into three categories, and each deserves a specific description rather than a vague one.

The first is reallocation efficiency. When you can see that one channel is operating in its saturation zone while another has significant headroom, shifting budget between them increases total revenue without increasing total spend. This is the most direct form of efficiency improvement and the one most marketers focus on. It is real and meaningful, but it requires the discipline to actually move budget rather than just note the finding.

The second is scenario planning. A good MMM lets you model what happens to revenue under different budget scenarios before you commit to them. You can test the impact of cutting a channel by thirty percent, or doubling investment in another, and see the projected revenue effect. This is particularly valuable during budget reviews when finance teams are asking for cuts. Having a model that shows the revenue cost of a proposed reduction is a much stronger position than defending spend on principle.
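Once you have fitted response curves, scenario projection is straightforward arithmetic over them. A minimal sketch, with entirely hypothetical curve parameters and budget figures:

```python
# Hypothetical fitted response curves per channel: (scale, half-saturation).
CURVES = {
    "search": (2_000_000, 100_000),
    "social": (1_000_000, 150_000),
    "tv":     (3_000_000, 400_000),
}

def channel_revenue(channel, spend):
    scale, half_sat = CURVES[channel]
    return scale * spend / (spend + half_sat)

def project(plan):
    """Total projected revenue for a {channel: spend} budget plan."""
    return sum(channel_revenue(ch, s) for ch, s in plan.items())

base = {"search": 500_000, "social": 100_000, "tv": 200_000}

# Scenario: shift 30% of a saturated search budget into social.
shift = base["search"] * 0.3
scenario = dict(base, search=base["search"] - shift, social=base["social"] + shift)

print(f"base plan:    £{project(base):,.0f}")
print(f"reallocation: £{project(scenario):,.0f}")
```

The same `project` function answers the finance-review question directly: cut any channel in a candidate plan and the revenue cost of the cut falls out of the comparison.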

The third is longer-term channel mix decisions. Attribution models are almost useless for evaluating brand channels because brand investment builds over time and its effect on sales is distributed across weeks and months. MMM handles this through the adstock mechanism. That means you can finally get a defensible read on what your brand advertising is contributing to the business, which is something that has been a persistent blind spot in performance-led marketing teams for years. I have sat in enough budget meetings where brand spend was cut because it could not be attributed, only to watch performance metrics deteriorate six months later as the brand halo faded.

The Limitations You Need to Understand Before You Start

MMM is not a silver bullet, and treating it like one is how teams end up making worse decisions with more sophisticated-looking data.

The model is only as good as the data you feed it. If your historical spend data is incomplete, inconsistently tracked, or missing channels, the model’s outputs will reflect those gaps. I have seen MMM projects stall for months because the client could not produce clean, consistent spend data going back far enough to build a reliable model. The minimum data requirement is typically two to three years of weekly data, and for businesses with strong seasonality, more is better. Pulling together that data is often the hardest part of the project.

The model also cannot tell you what you did not do. It measures the channels you used, at the spend levels you used them. Extrapolating beyond the range of your historical data, for example predicting the return from a channel you have never invested in at scale, involves significant uncertainty. The response curves are most reliable within the range of your actual historical investment.

There is also a granularity limitation. Traditional MMM works at a relatively high level of aggregation, typically channel or sub-channel by week. It cannot tell you which creative performed better, which audience segment responded more strongly, or which publisher within a channel drove the most value. For those questions, you need other tools. A/B testing frameworks within your analytics setup can complement MMM by answering the granular creative and audience questions that the model cannot.

Finally, MMM is a retrospective tool. It tells you what worked historically. Markets change, consumer behaviour shifts, and channel dynamics evolve. A model built on data from two years ago may not fully reflect the current competitive environment. This is why MMM outputs need to be refreshed regularly, not treated as a static truth.

How MMM Fits Into a Broader Measurement Stack

The most sophisticated measurement approaches I have seen do not rely on any single methodology. They use MMM alongside incrementality testing and platform attribution, treating each as a different lens on the same underlying reality.

Platform attribution, whether last-click, data-driven, or otherwise, is useful for in-platform optimisation. It helps you understand which keywords, audiences, and creatives are performing within a channel. But it should not be used to compare the value of channels against each other, because each platform measures itself differently and the double-counting problem makes cross-channel comparison unreliable.

Incrementality testing, which involves running controlled experiments to measure the causal effect of specific marketing activity, gives you the cleanest signal on whether a channel is genuinely driving incremental revenue. The limitation is that it is slow, expensive to run at scale, and requires holding back spend in a way that not all businesses are comfortable with. MMM is faster and cheaper but less causally precise. Used together, they provide a much stronger basis for decisions than either alone.

For businesses that are serious about this, the data infrastructure matters too. Exporting your analytics data into a warehouse environment gives you the flexibility to combine MMM outputs with behavioural data in ways that are not possible inside standard analytics platforms. Exporting GA4 data to BigQuery is a practical step toward that kind of integrated measurement environment.

Understanding how each of these tools fits together, and where each one falls short, is the foundation of honest measurement. The goal is not perfect data. It is honest approximation that reduces the size of the decisions you are making on gut feel alone.

Building the Business Case for MMM Investment

One of the practical questions that comes up is how to justify the cost of building or buying an MMM. The honest answer is that it depends on your media spend. For smaller budgets, the cost of a strong MMM project may not be recoverable from the efficiency gains it produces. For larger budgets, even a modest improvement in allocation efficiency can generate returns that dwarf the cost of the model.

The threshold varies, but a reasonable rule of thumb is that MMM starts to make economic sense when you are spending enough that a five to ten percent improvement in allocation efficiency would cover the cost of building the model. At significant media spend levels, that threshold is cleared comfortably.
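The rule of thumb is easy to sanity-check for your own numbers. The figures below are illustrative assumptions, not benchmarks for what an MMM project costs:

```python
# Rule-of-thumb check: would a 5-10% allocation-efficiency gain cover the cost?
# Both figures are illustrative assumptions.
annual_media_spend = 5_000_000
mmm_project_cost = 150_000

for gain in (0.05, 0.10):
    value = annual_media_spend * gain
    print(f"{gain:.0%} efficiency gain ~ £{value:,.0f} "
          f"-> covers cost: {value >= mmm_project_cost}")
```

At this spend level even the conservative end of the range clears the cost comfortably; at a tenth of the spend, it would not.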

There are now more accessible routes into MMM than there were five years ago. Open-source tools have lowered the barrier to entry for teams with data science capability in-house. Commercial platforms offer more turnkey solutions for teams that do not. Neither route is inherently better. What matters is whether the outputs are reliable, explainable, and actually used to inform decisions. A sophisticated model that sits in a slide deck and never changes a budget decision is worthless.

Early in my career, I learned that the value of any analytical tool is not in the tool itself but in what you do with it. I once built a reporting system from scratch because we could not afford the commercial alternative. It was imperfect. But because we understood exactly how it worked and trusted its outputs, we used it consistently to make decisions. That consistency was worth more than the precision we were missing.

What Good MMM Output Looks Like in a Media Plan

The practical translation from MMM outputs to a revised media plan involves a few specific steps that are worth making explicit.

Start with the contribution analysis. This tells you what percentage of your revenue each channel is driving, after controlling for external factors. Compare this to each channel’s share of your media budget. The gaps between contribution share and spend share are your first signal of where reallocation might be warranted.
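The gap check itself is simple once the model has produced contribution shares. The numbers here are invented for illustration:

```python
# Illustrative comparison of modelled contribution share vs budget share.
contribution = {"search": 0.40, "tv": 0.35, "social": 0.25}  # from the model
spend_share  = {"search": 0.55, "tv": 0.25, "social": 0.20}  # from the plan

for ch in contribution:
    gap = contribution[ch] - spend_share[ch]
    flag = "over-invested" if gap < 0 else "under-invested"
    print(f"{ch}: contributes {contribution[ch]:.0%} of revenue "
          f"on {spend_share[ch]:.0%} of budget ({flag})")
```

A channel taking 55% of budget but driving 40% of revenue is not automatically wrong, but it is the first place to interrogate with the response curves.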

Then move to the response curves. For channels where you are operating in the saturation zone, the curve will be flattening. For channels with headroom, it will still be steep. The optimal budget allocation, in a theoretical sense, is the one where the marginal return per pound is equalised across all channels. In practice, you will not hit that exactly, but using the curves to move toward it is a meaningful improvement over allocation by inertia.
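The "equalise marginal return per pound" condition can be approximated with a simple greedy allocation: repeatedly hand the next slice of budget to whichever channel currently pays the most for it. This is a sketch under hypothetical curve parameters, not an optimiser you would ship:

```python
# Hypothetical response curves per channel: (scale, half-saturation).
CURVES = {"search": (2_000_000, 100_000), "social": (1_000_000, 150_000)}

def response(channel, spend):
    scale, half_sat = CURVES[channel]
    return scale * spend / (spend + half_sat)

def allocate(total_budget, slice_size=10_000):
    """Greedy allocation: each slice goes to the best marginal channel."""
    plan = {ch: 0.0 for ch in CURVES}
    for _ in range(int(total_budget // slice_size)):
        gains = {ch: response(ch, plan[ch] + slice_size) - response(ch, plan[ch])
                 for ch in CURVES}
        best = max(gains, key=gains.get)
        plan[best] += slice_size
    return plan

print(allocate(600_000))
```

Because both curves are concave, the greedy result lands close to the point where marginal returns are equal across channels, which is the theoretical optimum the paragraph above describes.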

Finally, run the scenario models. Before presenting a revised media plan to stakeholders, model the projected revenue impact of the proposed reallocation. This gives you a concrete, defensible number to put in front of finance or leadership: “shifting ten percent of budget from channel A to channel B is projected to increase revenue by X, based on the historical response curves.” That is a very different conversation from “we think we should try spending more here.”

Tracking the right metrics throughout this process matters. Understanding which marketing metrics actually reflect business outcomes versus which ones reflect activity is a prerequisite for using MMM outputs well. If your internal reporting is built around vanity metrics, the model’s outputs will be difficult to connect to the numbers your business actually cares about.

For a broader look at how MMM connects to the rest of your analytics and measurement infrastructure, the Marketing Analytics and GA4 hub covers the tools, frameworks, and approaches that sit alongside it.

The Discipline Problem

I want to be direct about something that does not get discussed enough in the MMM literature. The biggest barrier to realising efficiency gains from media mix modeling is not the model. It is the organisational willingness to act on what the model tells you.

Shifting budget away from a channel that has always been in the plan, even when the model says it is saturated, requires someone to make a decision that will be uncomfortable. If that channel underperforms after the cut, they will be blamed. If it was cut and something else goes wrong in the business, the budget decision will be questioned. The incentives inside most marketing organisations push toward keeping things roughly as they are, because the downside risk of a visible change is more salient than the upside of a more efficient allocation.

This is why the framing of MMM as a planning tool rather than a reporting tool matters. When the model is embedded in the annual planning process, with outputs reviewed by senior stakeholders and reallocation decisions made deliberately and documented, you get a different outcome than when it is a quarterly report that gets filed and forgotten.

The teams I have seen get the most value from MMM are the ones that treat it as a standing input to planning, not a one-off project. They refresh the model regularly, they track whether their reallocation decisions produced the projected results, and they use the gaps between projection and reality to improve the model over time. That feedback loop is what turns MMM from an interesting analytical exercise into a genuine source of competitive advantage.

Good measurement discipline also means being honest about what you are tracking and why. Reporting frameworks that connect channel metrics to business outcomes are a useful reference point for building that kind of discipline across your marketing function, not just in the channels you are modeling.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How much historical data do you need to build a reliable media mix model?
Most practitioners recommend a minimum of two years of weekly data, and ideally three or more for businesses with strong seasonal patterns. The model needs enough variation in spend levels and external conditions to reliably separate the effects of different channels from each other and from factors like seasonality and price changes. Shorter data sets produce less stable coefficient estimates and wider confidence intervals around the outputs.
Can media mix modeling measure digital channels as effectively as traditional ones?
Yes, but with some nuances. Digital channels tend to have cleaner spend data and shorter response windows, which makes them easier to model in some respects. The challenge is that digital channels often have strong within-channel attribution data that teams are already relying on, which can create friction when the MMM tells a different story. Traditional channels like TV and out-of-home are often where MMM adds the most distinctive value, because they have no meaningful attribution data at all and their contribution is otherwise invisible in most measurement frameworks.
How often should you refresh a media mix model?
At a minimum, annually, aligned with your planning cycle. For businesses in fast-moving categories or those that have made significant changes to their channel mix, every six months is more appropriate. The model’s outputs become less reliable as market conditions diverge from the historical period it was trained on. Treating it as a living tool that is updated regularly produces better decisions than treating it as a one-off project.
What is the difference between media mix modeling and multi-touch attribution?
Multi-touch attribution assigns credit for conversions to touchpoints in a user’s experience, using data collected at the individual or session level. It is useful for understanding the path to conversion within digital channels but relies on tracking data that is increasingly incomplete due to privacy changes and cross-device behaviour. Media mix modeling works at an aggregate level using business outcomes data, which makes it independent of tracking limitations. The two approaches answer different questions and work best when used together rather than as substitutes for each other.
Does media mix modeling work for businesses with small media budgets?
The economics become challenging at lower spend levels. Building a strong MMM requires meaningful investment in data preparation, model development, and ongoing maintenance. For businesses spending below a certain threshold, the efficiency gains that the model might identify are unlikely to cover that cost. Simpler approaches, such as channel-level incrementality tests or structured budget experiments, often deliver more value per pound spent for smaller advertisers. MMM is most clearly justified when the scale of spend means that even marginal improvements in allocation efficiency produce significant returns.
