Marketing Mix Modeling Is Having a Moment. Here Is Why That Matters

Marketing mix modeling (MMM) has moved from a tool used by a handful of large consumer goods companies to something most serious marketing teams are actively evaluating or rebuilding. The shift is being driven by the collapse of reliable cookie-based attribution, growing pressure to justify spend across channels, and the arrival of faster, more accessible modeling approaches that no longer require a team of econometricians and six months of runway. If you have been watching this space, the last two years have produced more meaningful developments than the previous decade combined.

Key Takeaways

  • MMM is experiencing a genuine resurgence driven by the deprecation of third-party cookies and the limits of last-click attribution, not just vendor marketing cycles.
  • Modern MMM runs faster and at lower cost than legacy approaches, making it viable for mid-market businesses, not just enterprise advertisers.
  • The biggest risk with MMM is treating model outputs as ground truth rather than as one informed perspective on a complex system.
  • Combining MMM with incrementality testing produces more reliable decisions than relying on either method alone.
  • The companies getting the most value from MMM are using it to make strategic budget decisions, not to optimise individual campaigns in real time.

Why MMM Is Back in the Conversation

Econometric modeling for marketing is not new. Companies were running regression-based media mix models in the 1990s. What is new is the context around it. For most of the 2010s, digital attribution felt like it had solved the measurement problem. You could see which keywords converted, which display placements assisted, which email sent someone to the checkout. The data was granular, fast, and cheap to access. Most marketers, including me, spent years optimising inside that framework without seriously questioning its foundations.

The problem is that digital attribution was always measuring a narrow slice of reality. It captured what happened inside the tracked funnel and called it the full picture. When I was running agency teams managing hundreds of millions in ad spend, we would see campaigns that looked efficient in the platform dashboards but were not moving the business numbers. Revenue would plateau while ROAS looked healthy. The attribution model said everything was working. The P&L said something else entirely.

That tension is exactly what MMM is designed to resolve. It works from the top down, using historical sales data and media investment data to estimate the contribution of each channel to overall business outcomes. It does not rely on cookies, pixels, or individual user tracking. That makes it structurally more robust in a world where signal quality from digital tracking is declining, and it means it can account for channels like TV, radio, and out-of-home that digital attribution handles poorly, if at all.
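
To make the top-down idea concrete, here is a minimal sketch of a regression-based mix model. The channel names, the adstock decay, and the synthetic data are all illustrative assumptions rather than a production specification; a real model would also account for seasonality, price, promotions, and saturation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_weeks = 156  # roughly three years of weekly data

# Synthetic weekly spend for three illustrative channels
spend = {
    "tv": rng.gamma(2.0, 15_000, n_weeks),
    "search": rng.gamma(2.0, 8_000, n_weeks),
    "social": rng.gamma(2.0, 5_000, n_weeks),
}

def adstock(x, decay=0.5):
    """Carry over a share of each week's media effect into later weeks."""
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

# Regress aggregate weekly sales on adstocked spend; no user tracking involved
X = np.column_stack([adstock(s) for s in spend.values()])
sales = 50_000 + X @ np.array([0.8, 1.6, 1.1]) + rng.normal(0, 20_000, n_weeks)

model = sm.OLS(sales, sm.add_constant(X)).fit()
print(model.summary(xname=["base"] + list(spend)))
```

The fitted coefficients are the model's estimate of what each channel contributes per unit of carried-over spend, derived entirely from aggregate data.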

For a broader look at how measurement approaches are evolving across the analytics landscape, the Marketing Analytics hub covers the full picture, from MMM to GA4 to incrementality testing.

What Has Actually Changed in the Last Two Years

The most significant shift is operational rather than conceptual. Legacy MMM projects were expensive, slow, and opaque. A traditional model might take three to six months to build, cost a significant amount in consulting fees, and produce outputs that were difficult to interrogate or update. By the time the results arrived, the media plan they were meant to inform had already been set.

What has changed is the tooling. Open-source frameworks like Meta’s Robyn and Google’s Meridian have made the underlying methodology accessible to in-house data science teams without requiring a specialist econometrics consultancy. Cloud computing has reduced the cost of running complex models. And the modeling approaches themselves have evolved, with Bayesian methods gaining ground because they handle uncertainty more honestly than frequentist approaches and allow prior knowledge to be incorporated into the model rather than ignored.
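
As a flavour of what the Bayesian approach buys you, here is a minimal sketch in PyMC on synthetic data. It is deliberately generic rather than an example of Robyn's or Meridian's actual APIs, and every prior value in it is an illustrative assumption. The point is that prior knowledge (media effects cannot be negative, baseline sales sit in a known range) goes into the model explicitly, and what comes out is a distribution rather than a single number.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_weeks, n_channels = 156, 3
spend = rng.gamma(2.0, 1.0, size=(n_weeks, n_channels))  # scaled spend
sales = 5.0 + spend @ np.array([0.8, 1.6, 1.1]) + rng.normal(0, 1.0, n_weeks)

with pm.Model() as mmm:
    base = pm.Normal("base", mu=5.0, sigma=2.0)  # prior belief about baseline
    # Half-normal priors encode the belief that media cannot drive negative sales
    beta = pm.HalfNormal("beta", sigma=2.0, shape=n_channels)
    noise = pm.HalfNormal("noise", sigma=2.0)
    pm.Normal("obs", mu=base + pm.math.dot(spend, beta),
              sigma=noise, observed=sales)
    idata = pm.sample(1000, tune=1000, chains=2)

# Posterior means for the channel effects, with full uncertainty available
print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```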

Faster iteration is the practical consequence. Teams that previously refreshed their MMM annually are now running quarterly or even monthly updates. That changes what you can do with the output. Instead of using it purely for annual budget planning, you can use it to monitor whether your channel mix is drifting out of balance, to sense-check large spend decisions before committing, and to build a more reliable view of diminishing returns curves by channel.

The Forrester perspective on black-box marketing analytics is worth reading in this context. The warning about models that produce outputs without explainability applies directly to MMM. Faster and cheaper only helps if the model is transparent enough to be interrogated and challenged.

Who Is Actually Adopting MMM Now

For most of its history, MMM was the preserve of large FMCG companies and major retailers with decades of clean sales data and the budget to commission bespoke econometric studies. That is no longer the defining profile of an MMM user.

Direct-to-consumer brands, subscription businesses, and mid-market companies with annual media budgets in the low millions are now running models. The threshold has dropped considerably. The practical floor is roughly three years of weekly sales data and media spend data broken out by channel. If you have that, you have enough to build a meaningful model. The outputs will be less precise than those from a company with twenty years of data across thirty markets, but they will still be more honest than a last-click attribution report.

What I find interesting about the current wave of adoption is that it is not being driven purely by data science teams. It is being driven by CMOs and finance directors who have grown sceptical of platform-reported ROAS and want a measurement approach that is independent of the platforms doing the spending. That is a healthy instinct. Asking Google Analytics to tell you whether Google Ads is working has always been a structurally compromised question.

The Limitations That Are Not Being Talked About Enough

MMM is genuinely useful. It is also genuinely imperfect, and the current enthusiasm for it risks repeating the same mistake the industry made with digital attribution: treating a useful approximation as a definitive answer.

The first limitation is data quality. MMM is only as good as the data going into it. If your media spend data is inconsistent, if channel definitions have changed over time, or if your sales data has gaps or anomalies, the model will absorb those problems and produce outputs that look authoritative but are built on shaky foundations. I have seen this happen with analytics implementations more broadly. When I was working with agency clients whose analytics setups were poorly structured, the dashboards looked clean but the numbers were not telling the truth. The same principle applies here.

The second limitation is that MMM works at an aggregate level. It tells you how channels have performed on average over the historical period you are modeling. It does not tell you how a specific campaign performed, how performance varies by audience segment, or what is happening at the margin right now. For those questions, you need other tools.

The third limitation is the assumption of stationarity, the idea that the relationships between media inputs and sales outputs are stable over time. They are not. Consumer behaviour shifts, competitive dynamics change, and creative quality varies. A model trained on historical data will not automatically account for structural breaks in those relationships. Modelers know this, but it does not always get communicated clearly to the marketing leaders using the outputs.
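
One cheap sanity check on that assumption is to re-fit the model over a rolling window and watch whether the channel coefficients hover or drift. The sketch below uses statsmodels' RollingOLS on synthetic data with a deliberate structural break; the window length and break point are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

rng = np.random.default_rng(7)
n_weeks = 208
x = rng.gamma(2.0, 10_000, n_weeks)

# Simulate a structural break: the channel's effect halves after week 104
beta = np.where(np.arange(n_weeks) < 104, 1.5, 0.75)
sales = 40_000 + beta * x + rng.normal(0, 15_000, n_weeks)

X = sm.add_constant(pd.Series(x, name="channel"))
rolling = RollingOLS(pd.Series(sales), X, window=52).fit()

# A coefficient that trends over time, rather than hovering, is a sign the
# stationarity assumption (and therefore the model's averages) should be questioned
print(rolling.params["channel"].dropna().describe())
```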

There is a useful parallel in how analytics tools are discussed more broadly. Moz’s overview of Google Analytics alternatives makes the point that different tools measure different things and no single tool gives you the complete picture. The same logic applies to measurement methodologies. MMM is a perspective, not a verdict.

How MMM and Incrementality Testing Fit Together

The most sophisticated measurement setups are not choosing between MMM and incrementality testing. They are using both, with each method informing and pressure-testing the other.

MMM gives you a strategic view across all channels over a long time horizon. Incrementality testing gives you a causal answer to a specific question about a specific channel or campaign at a specific point in time. The two approaches have different strengths and different failure modes. When they agree, you have more confidence in the conclusion. When they disagree, you have a more interesting and more valuable question to investigate.

Early in my career, I learned that the most dangerous number in marketing is one that appears to confirm what you already believed. When I was at lastminute.com running paid search campaigns, we saw results that looked extraordinary in the platform reports. The temptation was to scale immediately and ask questions later. The more disciplined approach was to ask whether the numbers were telling us something real or just reflecting the limitations of how we were measuring. That instinct, to interrogate results rather than celebrate them, is exactly what a combined MMM and incrementality approach formalises.

The Forrester piece on what to do after you have a marketing dashboard makes a related point: having the data is not the same as having the insight. The measurement infrastructure is the starting point, not the destination.

The Budget Allocation Question MMM Is Actually Answering

The most practical application of MMM outputs is budget allocation across channels, specifically identifying where you are over-investing relative to the marginal return you are getting, and where there is room to scale before diminishing returns set in.

This is where MMM earns its keep. Attribution models tend to produce a ranking of channels by conversion volume or assisted conversions. That is useful for understanding the funnel, but it does not tell you what would happen if you moved money from one channel to another. MMM, done well, gives you response curves by channel that let you model the impact of budget shifts before you make them.
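
A simplified sketch of how that works in practice: take a saturating response curve per channel (a Hill-type curve is a common choice) and compare the marginal return of the next unit of spend at current levels. The curve parameters below are hypothetical stand-ins for what a fitted model would produce.

```python
import numpy as np

def hill_response(spend, top, half_sat, shape=1.5):
    """Saturating response: modelled revenue approaches `top` as spend grows."""
    return top * spend**shape / (half_sat**shape + spend**shape)

def marginal_return(spend, top, half_sat, shape=1.5, step=1_000.0):
    """Extra revenue per unit for the next `step` of spend at current levels."""
    return (hill_response(spend + step, top, half_sat, shape)
            - hill_response(spend, top, half_sat, shape)) / step

# Hypothetical fitted parameters per channel; in practice these come from
# the model, with uncertainty attached
channels = {
    "tv":     {"spend": 400_000, "top": 900_000, "half_sat": 250_000},
    "search": {"spend": 150_000, "top": 500_000, "half_sat": 300_000},
    "social": {"spend":  80_000, "top": 300_000, "half_sat": 120_000},
}

for name, ch in channels.items():
    mr = marginal_return(ch["spend"], ch["top"], ch["half_sat"])
    print(f"{name:>7}: next unit of spend returns ~{mr:.2f}")
```

The `shape` parameter controls how sharply returns diminish; fitting it per channel is what produces the response curves described above.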

When I was growing an agency from around twenty people to over a hundred, one of the persistent challenges was helping clients understand that the channel getting the most conversions in their attribution report was not necessarily the channel that would benefit most from additional investment. A channel can be highly efficient at current spend levels and completely saturated at higher spend levels. MMM surfaces that distinction in a way that attribution cannot.

The practical implication for most marketing teams is that MMM should be informing the annual and quarterly budget planning process, not replacing the day-to-day optimisation work happening inside platforms. These are different questions operating at different time horizons, and conflating them is a common source of frustration with MMM outputs.

What to Watch in the MMM Space Over the Next 12 Months

A few developments are worth tracking if you are evaluating MMM or currently running a model.

The first is the maturation of Google’s Meridian, announced in 2024 and made generally available as an open-source framework in early 2025. It represents a significant investment by Google in making Bayesian MMM more accessible. The fact that Google is investing in a methodology that is explicitly independent of platform-reported metrics is notable. It reflects the reality that marketers are demanding measurement approaches they can trust, and that the platforms have an interest in being part of that conversation rather than being excluded from it.

The second is the growing interest in near-real-time MMM. Traditional models refresh quarterly or annually. Some teams are experimenting with weekly model updates using automated pipelines. The challenge is that more frequent updates require more careful handling of noise versus signal, and there is a real risk of over-reacting to short-term fluctuations that the model interprets as meaningful trends. The principles of sound analytics practice from MarketingProfs remain relevant here: more data points do not automatically produce better decisions if the analytical framework is not disciplined.

The third development is the integration of MMM outputs into media planning and buying platforms. Several demand-side platforms are beginning to incorporate MMM-derived budget recommendations directly into their interfaces. This is convenient, but it also introduces the same conflict of interest that makes platform-reported attribution unreliable. An MMM model that lives inside a media buying platform has an obvious structural incentive to recommend spending more on that platform. Treat those integrations with appropriate scepticism.

For teams building out their measurement capabilities more broadly, the Marketing Analytics hub covers the full range of approaches, including how MMM sits alongside GA4, incrementality testing, and other measurement frameworks. Getting the architecture right matters as much as the individual tools.

Getting Started Without Getting Overwhelmed

If you are evaluating MMM for the first time, the practical starting point is an honest audit of your data. Do you have three or more years of weekly sales data? Is your media spend data broken out by channel in a consistent way over that period? Are there major structural breaks in the business, a rebrand, a market entry, a significant product change, that would make the historical data less comparable? The answers to those questions will tell you more about your MMM readiness than any vendor pitch.
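
Those questions translate naturally into a short scripted audit. The sketch below runs the checks on a synthetic stand-in for a weekly export; the column names and the 156-week threshold are illustrative assumptions, and the structural-break question is deliberately absent because it needs human judgment rather than a script.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic stand-in for a real export: one row per week per channel,
# plus the weekly sales total
weeks = pd.date_range("2022-01-03", periods=156, freq="W-MON")
channels = ["tv", "search", "social"]
df = pd.DataFrame({
    "week": np.repeat(weeks, len(channels)),
    "channel": channels * len(weeks),
    "spend": rng.gamma(2.0, 10_000, len(weeks) * len(channels)),
})
df["sales"] = df["week"].map(
    pd.Series(rng.normal(500_000, 50_000, len(weeks)), index=weeks))

week_gaps = df["week"].drop_duplicates().sort_values().diff().dropna()
checks = {
    "3+ years of weekly history": df["week"].nunique() >= 156,
    "no missing weeks": week_gaps.eq(pd.Timedelta(weeks=1)).all(),
    "spend broken out by channel": df["channel"].nunique() >= 2,
    "no gaps in sales or spend": df[["sales", "spend"]].notna().all().all(),
}
for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```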

The second step is being clear about what question you are trying to answer. MMM is well suited to strategic budget allocation questions and long-term channel mix decisions. It is not well suited to campaign-level optimisation or real-time decision-making. If your primary measurement problem is something other than budget allocation, MMM may not be the right starting point.

The third step is thinking carefully about who builds and maintains the model. In-house data science teams using open-source frameworks like Robyn or Meridian can produce credible outputs at low cost, but they need enough domain knowledge to make sensible modeling decisions and enough independence to report results honestly even when those results are inconvenient. External consultancies bring expertise but can introduce their own incentives. Neither option is inherently better. What matters is that whoever is running the model understands both the methodology and the business context well enough to know when the outputs are trustworthy and when they should be questioned.

The Buffer overview of content marketing metrics makes a point that applies equally well here: the goal is not to collect more measurement, it is to make better decisions. MMM is a means to that end, not an end in itself. The teams that get the most value from it are the ones that treat model outputs as one input into a decision, not as the decision itself.

When I built my first website by teaching myself to code because the budget was not available to outsource it, the lesson was not about technical skills. It was about understanding a system well enough to work with it honestly, rather than treating it as a black box that produces answers. That is exactly the mindset that good MMM practice requires. You do not need to be an econometrician. You do need to understand what the model is doing, what assumptions it is making, and what it cannot see.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is marketing mix modeling and how does it differ from attribution?
Marketing mix modeling uses historical sales and media spend data to estimate the contribution of each channel to overall business outcomes using statistical regression. Attribution tracks individual user journeys through a funnel and assigns credit to touchpoints. MMM works top-down from aggregate data and does not require cookies or user-level tracking. Attribution works bottom-up from individual interactions. The two approaches answer different questions and have different blind spots, which is why the most robust measurement setups use both.
How much data do you need to run a marketing mix model?
The practical minimum is around two to three years of weekly data covering sales outcomes and media spend broken out by channel. More data produces more reliable outputs, particularly for modeling seasonality and long-term brand effects. The data also needs to be consistent over the period being modeled. If channel definitions, business structure, or market conditions have changed significantly, those breaks need to be accounted for in the model or they will distort the results.
Is marketing mix modeling only viable for large companies?
It used to be, but that has changed. Open-source frameworks like Meta’s Robyn and Google’s Meridian have made the methodology accessible to in-house data science teams without requiring expensive specialist consultancies. Mid-market businesses with annual media budgets in the low millions and sufficient historical data are now running credible models. The outputs will be less precise than those from a large enterprise with decades of data, but they are still more structurally honest than platform-reported attribution.
What are the main limitations of marketing mix modeling?
Three limitations matter most. First, MMM is only as reliable as the data going into it. Poor data quality produces outputs that look authoritative but are not. Second, MMM works at an aggregate level and cannot tell you how individual campaigns or audience segments performed. Third, MMM assumes the relationships between media inputs and sales are relatively stable over time, which is not always true. Structural changes in the business, market, or competitive environment can make historical patterns a poor guide to current reality.
How often should you update a marketing mix model?
Most teams refresh their models quarterly or annually, which aligns with budget planning cycles. Some teams with automated data pipelines are experimenting with monthly or even weekly updates, but more frequent refreshes require careful handling to avoid over-reacting to short-term noise. The right cadence depends on how quickly your media mix and market conditions change, and whether the model is being used for strategic planning or more tactical decision-making. Annual models are appropriate for budget setting. Quarterly models are better if you want to monitor channel mix drift over the course of a year.
