Media Forecast: How to Plan When the Numbers Keep Moving

A media forecast is a structured projection of how advertising spend, channel mix, and audience reach will perform over a defined planning period. Done well, it gives commercial teams a working model for budget allocation, scenario planning, and accountability. Done badly, it becomes a spreadsheet that sounds authoritative but is really just organised guesswork dressed up as strategy.

Most forecasts fail not because the data is wrong, but because the assumptions underneath them are never challenged. This article is about how to build a media forecast that holds up when conditions change, which they always do.

Key Takeaways

  • A media forecast is only as reliable as the assumptions it rests on. Surface those assumptions early and revisit them quarterly.
  • Lower-funnel channels are easier to measure, which makes them easier to over-invest in. Forecasts should account for the full funnel, not just the part that looks cleanest in a dashboard.
  • Scenario planning is not pessimism. Building best, base, and downside cases into your forecast is the difference between a plan and a bet.
  • Channel-level CPM and CPC trends shift faster than most annual plans account for. Build in a review cadence, not just a planning cadence.
  • The most dangerous number in any media forecast is the one nobody questioned in the room.

What Is a Media Forecast and Why Does It Keep Going Wrong?

When I was running the performance division at iProspect, we were managing hundreds of millions in annual ad spend across dozens of clients. Every quarter, someone would produce a forecast. And every quarter, the forecast would be presented with a confidence it had not earned. The numbers looked precise. The decimal places implied certainty. But underneath them were assumptions about click-through rates, conversion rates, and CPMs that had been copied forward from the previous period without anyone asking whether the market had moved.

A media forecast goes wrong for one of three reasons. Either the input data is unreliable, the model is too rigid to absorb market changes, or the people building it have an incentive to make the numbers look a certain way. The third one is the most common and the least discussed.

Agencies forecast to retain budgets. In-house teams forecast to justify headcount. Neither of those incentives produces honest numbers. If you want a media forecast that actually helps you make decisions, you need to separate the forecasting process from the budget defence process. They are not the same thing, and conflating them is where most planning cycles go quietly wrong.

What Should a Media Forecast Actually Include?

A working media forecast covers five things: projected spend by channel, expected reach and frequency, estimated cost efficiency metrics by channel, revenue or outcome projections tied to spend levels, and the assumptions underpinning all of the above.

That last part, the assumptions, is the section most forecasts either omit entirely or bury in an appendix nobody reads. It should be the first page. If your forecast assumes a 3% click-through rate on paid search, write it down. If it assumes CPMs will hold flat year on year, write that down too. Because when the forecast is wrong, and it will be wrong to some degree, the assumptions page is where you go to understand why. Without it, you are just looking at the gap between prediction and outcome with no diagnostic framework to learn from.
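To make the idea concrete, here is a minimal sketch of what "assumptions as the first page" can look like as a model rather than a document. Every number below is hypothetical, not a benchmark; the point is that each assumption is a named input, so when forecast and actuals diverge you can attribute the gap to the assumption that moved:

```python
# Illustrative sketch: forecast assumptions as explicit, named inputs.
# All figures are hypothetical, not benchmarks.
assumptions = {
    "paid_search_ctr": 0.03,   # assumed click-through rate
    "paid_search_cvr": 0.04,   # assumed conversion rate
    "paid_search_cpc": 1.80,   # assumed average cost per click
}

def forecast_conversions(budget, a):
    clicks = budget / a["paid_search_cpc"]
    return clicks * a["paid_search_cvr"]

def diagnose(budget, a, actual_cpc, actual_cvr):
    """Attribute the forecast gap to each assumption by swapping
    in actuals one at a time (a simple one-at-a-time sensitivity)."""
    base = forecast_conversions(budget, a)
    with_cpc = forecast_conversions(budget, {**a, "paid_search_cpc": actual_cpc})
    with_cvr = forecast_conversions(budget, {**a, "paid_search_cvr": actual_cvr})
    return {"forecast": base,
            "gap_from_cpc": with_cpc - base,
            "gap_from_cvr": with_cvr - base}

print(diagnose(100_000, assumptions, actual_cpc=2.10, actual_cvr=0.035))
```

A spreadsheet with a clearly labelled assumptions block achieves the same thing; the medium matters less than the fact that the assumptions are inputs you can interrogate, not numbers buried inside formulas.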

Channel-level projections should account for both the cost environment and the competitive landscape. Paid search CPCs in competitive categories have moved significantly over the past few years as more advertisers compete for the same intent signals. A forecast built on last year’s average CPC without accounting for that pressure will systematically underestimate cost and overestimate reach.

If you are thinking about how media forecasting fits into a broader growth planning process, the articles in the Go-To-Market and Growth Strategy hub cover the commercial frameworks that sit behind channel decisions and budget allocation.

How Do You Handle the Full Funnel Without Losing Accountability?

Earlier in my career, I overvalued lower-funnel performance channels. The attribution looked clean, the numbers were trackable, and clients loved the dashboards. It took me longer than I would like to admit to recognise that a significant portion of what those channels were being credited for was going to happen regardless. You can spend a lot of money capturing demand that already existed without ever creating new demand. And you can build a media forecast that looks excellent on paper while the business slowly runs out of new customers to reach.

Think about it this way. A person who walks into a clothes shop and tries something on is far more likely to buy than someone browsing online. But if the shop stops advertising entirely and just waits for people to walk in, footfall eventually drops. The people who were always going to buy still do, for a while. Then the pipeline empties. Lower-funnel performance marketing has the same dynamic. It can look like it is working right up until the moment the audience pool dries up.

A media forecast that only models the lower funnel is not a full forecast. It is a capture model. To build something more honest, you need to project awareness reach, consideration metrics, and the lag effects between upper-funnel investment and lower-funnel conversion. That lag is uncomfortable to model because it requires you to make claims about causation that attribution tools will not confirm. But the discomfort is not a reason to ignore it. It is a reason to get better at reasoning about it.

For brands thinking about how to structure demand creation versus demand capture in their channel mix, the market penetration frameworks covered by Semrush offer a useful starting point for understanding where growth actually comes from.

What Does Scenario Planning Look Like in a Media Forecast?

The most useful thing I ever added to a media forecast was a second tab. Not a second sheet with more data, but a second model built on different assumptions. A base case and a downside case, side by side, with a clear articulation of what would have to be true for each one to play out.

Scenario planning in media forecasting is not about predicting the future. It is about making explicit the range of outcomes that are plausible given what you know, and making sure the business is not betting everything on the most optimistic version of events. I have sat in too many planning sessions where the forecast was presented as a single number, the budget was set against that number, and then six months later everyone was surprised when conditions changed. The surprise was not the problem. The lack of a contingency framework was.

A practical scenario model for media should include at minimum three cases. The base case reflects current market conditions with modest improvement assumptions. The upside case reflects what happens if CPMs soften, conversion rates improve, or a new channel outperforms expectations. The downside case reflects what happens if costs rise, a major platform changes its algorithm, or a competitor increases spend aggressively in your category.

Each scenario should have a trigger point: a specific metric or threshold that tells you which scenario you are actually in. That way, the forecast becomes a live decision tool rather than a static document you revisit once a year and quietly update to match what actually happened.
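The trigger-point idea above can be sketched in a few lines. The CPM thresholds here are invented for illustration; in practice you would set a band per channel and per key metric, agreed during planning:

```python
# Illustrative sketch: three scenarios with explicit trigger thresholds.
# All CPM figures and bands are hypothetical.
scenarios = {
    "upside":   {"assumed_cpm": 9.0,  "trigger": lambda cpm: cpm < 9.5},
    "base":     {"assumed_cpm": 11.0, "trigger": lambda cpm: 9.5 <= cpm <= 12.5},
    "downside": {"assumed_cpm": 14.0, "trigger": lambda cpm: cpm > 12.5},
}

def which_scenario(observed_cpm):
    """The trigger tells you which scenario you are actually in."""
    for name, s in scenarios.items():
        if s["trigger"](observed_cpm):
            return name

print(which_scenario(13.2))  # downside: observed CPM has broken above the base band
```

The value is not in the code but in the discipline: once the trigger is written down, "which scenario are we in?" becomes a question with an observable answer rather than a matter of opinion in the quarterly meeting.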

BCG’s work on aligning marketing and commercial planning processes is worth reading for anyone who wants to understand how the best-run organisations connect media forecasting to broader business planning rather than treating it as a separate exercise.

How Should You Model Channel Cost Trends?

One of the most consistent errors I see in media forecasts is the assumption that channel costs will behave the way they behaved last year. Sometimes they do. Often they do not. And the channels where costs are most likely to shift are usually the ones with the highest concentration of advertiser demand, which tends to mean the channels you are already most heavily invested in.

Paid social CPMs have historically shown significant seasonal variation, with Q4 reliably more expensive than Q1 across most categories. If your forecast treats CPM as a flat annual average, you will systematically overestimate reach in Q4 and underestimate it in Q1. That sounds like a minor modelling issue. In practice, it means your campaign performance looks worse in the period when it matters most to most businesses, and your planning cycle never fully accounts for why.
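The size of that distortion is easy to demonstrate. The sketch below compares reach estimated from a flat annual CPM against quarter-specific CPMs; all the CPM figures are hypothetical, chosen only to show the direction of the error:

```python
# Illustrative sketch: reach forecast from a flat annual CPM versus
# quarter-specific CPMs. All figures are hypothetical.
budget_per_quarter = 250_000
flat_cpm = 10.0
seasonal_cpm = {"Q1": 8.0, "Q2": 9.5, "Q3": 10.0, "Q4": 12.5}

def impressions(budget, cpm):
    return budget / cpm * 1000  # CPM is cost per thousand impressions

for q, cpm in seasonal_cpm.items():
    flat = impressions(budget_per_quarter, flat_cpm)
    actual = impressions(budget_per_quarter, cpm)
    print(q, f"forecast error from flat CPM: {(flat - actual) / actual:+.0%}")
```

With these example numbers the flat-CPM model overstates Q4 reach by 25% and understates Q1 reach by 20%, which is exactly the pattern described above: the forecast flatters the cheap quarter and fails in the expensive one.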

Paid search cost trends are driven by competitive density as much as by platform pricing. In categories where a handful of well-funded competitors are aggressively bidding for the same keywords, CPC can move substantially within a single quarter. A media forecast that does not include a competitive spend monitoring process alongside the financial model is missing a significant input.

Programmatic display and video costs are influenced by inventory availability, publisher relationships, and the broader health of the open web ecosystem. These are harder to forecast with precision, which is exactly why they need to be modelled with explicit uncertainty ranges rather than point estimates.
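Modelling with an explicit range rather than a point estimate can be as simple as carrying a low, mid, and high cost case through the calculation. The band below is hypothetical; the discipline is that the forecast reports a range, not a single number:

```python
# Illustrative sketch: programmatic reach modelled from a CPM range
# instead of a point estimate. Figures are hypothetical.
budget = 500_000
cpm_range = {"low": 6.0, "mid": 8.0, "high": 11.0}  # plausible CPM band

# Note the inversion: the high-cost case produces the low-reach case.
reach_range = {k: budget / cpm * 1000 for k, cpm in cpm_range.items()}
print({k: f"{v / 1e6:.1f}M impressions" for k, v in reach_range.items()})
```

Presenting the band forces the downstream conversation to happen at planning time: if the business case only works at the low-cost end of the range, that is a risk worth knowing before the budget is committed.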

The intelligent growth model thinking from Forrester is useful context here. The point about building forecasts that are honest about uncertainty rather than artificially precise is one that applies directly to channel cost modelling.

How Do You Connect Media Forecast to Business Outcomes?

The reason most media forecasts feel disconnected from the business is that they are built in isolation from it. A media team produces a forecast in terms of impressions, clicks, and cost per acquisition. A commercial team produces a revenue forecast in terms of units, margins, and customer lifetime value. The two documents rarely talk to each other until something goes wrong.

When I was turning around a loss-making agency, one of the first things I did was insist that every media forecast had a revenue line attached to it. Not a vague correlation, but an explicit model: if we spend X at this efficiency, we expect to generate Y in pipeline or revenue. That model would be wrong. I knew it would be wrong. But the act of building it forced conversations between the media team and the commercial team that had not been happening. And those conversations surfaced assumptions on both sides that needed to be stress-tested.
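The "spend X at this efficiency, expect Y" line is a very small model. A minimal sketch, with every efficiency and value figure hypothetical and agreed jointly with the commercial team:

```python
# Illustrative sketch of an explicit spend-to-revenue line:
# "if we spend X at this efficiency, we expect Y in revenue".
# All figures are hypothetical.
spend = 300_000
cpa = 120.0               # assumed blended cost per acquisition
avg_order_value = 450.0   # conversion value agreed with the commercial team

conversions = spend / cpa
revenue = conversions * avg_order_value
roas = revenue / spend
print(f"{conversions:.0f} conversions, {revenue:,.0f} revenue, ROAS {roas:.2f}")
```

The model will be wrong, as the text says. Its job is to make the media team's efficiency assumptions and the commercial team's value assumptions live in the same calculation, where disagreements surface early.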

Connecting media forecasts to business outcomes requires three things. First, a shared definition of what counts as a conversion and what the value of that conversion is. Second, an agreed model for how media channels contribute to that conversion, including the channels that contribute earlier in the process and are harder to attribute directly. Third, a review process that compares forecast to actuals at the outcome level, not just the channel efficiency level.

That last point matters more than most teams realise. A channel can hit its CPA target while the business misses its revenue target. If your forecast review only looks at channel metrics, you will never see that disconnect clearly enough to act on it. The BCG work on go-to-market strategy and commercial alignment makes a similar point about the gap between marketing efficiency metrics and commercial outcomes.

What Role Does Measurement Infrastructure Play?

A media forecast is only as good as the measurement infrastructure behind it. If your attribution model is broken, your historical data is unreliable; and if your historical data is unreliable, your forecast inputs are unreliable. That does not mean you cannot build a useful forecast. It means you need to be honest about the quality of the data you are working with.

I have judged the Effie Awards, which means I have read a lot of effectiveness case studies written by very good agencies. One pattern I noticed repeatedly: the strongest cases were not the ones with the cleanest data. They were the ones where the team had been honest about measurement limitations and built their argument around multiple converging signals rather than a single attribution claim. That approach to evidence is exactly what good media forecasting requires.

When your measurement infrastructure has gaps, the right response is to be explicit about them in the forecast. Label the estimates that are based on modelled data versus observed data. Flag the channels where attribution confidence is lower. Build in a wider uncertainty range for those channels. This makes the forecast less tidy, but it makes it more useful as a decision tool.

The alternative, smoothing over the gaps and presenting false precision, creates a different problem. When the forecast is wrong, and the uncertainty was never surfaced, the conversation becomes about who got the numbers wrong rather than what the numbers were actually telling you. That is a waste of time and a source of organisational friction that compounds over multiple planning cycles.

For teams thinking about how to build growth models that are honest about uncertainty, the growth strategy examples from Semrush illustrate how the best-performing teams combine structured planning with iterative testing rather than betting everything on a single forecast model.

How Often Should You Revisit a Media Forecast?

Most organisations build an annual media forecast and then update it once, at the half-year mark, if they update it at all. That cadence made more sense when media markets moved more slowly. It does not reflect the speed at which channel costs, platform algorithms, and competitive dynamics now shift.

A more useful approach is to treat the annual forecast as a strategic frame and the quarterly review as the operational tool. The annual forecast sets the direction: total budget envelope, channel mix rationale, outcome targets, and the assumptions underpinning all of it. The quarterly review tests those assumptions against what is actually happening in the market and adjusts the operational plan accordingly.

The quarterly review should answer four questions. Are channel costs tracking in line with forecast assumptions? Are conversion rates and efficiency metrics holding? Are there signals from the competitive environment that should change the channel mix? And are the business outcomes we are targeting still achievable on the current trajectory?
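Three of those four questions can be turned into simple threshold checks against the forecast assumptions (the competitive-signal question needs human judgment and is not modelled here). All figures and the 10% tolerance below are hypothetical:

```python
# Illustrative sketch: quarterly-review questions as threshold checks
# against forecast assumptions. Figures and tolerance are hypothetical.
forecast = {"cpc": 1.80, "cvr": 0.040, "pipeline": 1_200_000}
actuals  = {"cpc": 2.05, "cvr": 0.041, "pipeline": 980_000}

def within(actual, assumed, tolerance=0.10):
    """Is the actual within a relative tolerance of the assumption?"""
    return abs(actual - assumed) / assumed <= tolerance

review = {
    "costs_on_track":      within(actuals["cpc"], forecast["cpc"]),
    "efficiency_holding":  within(actuals["cvr"], forecast["cvr"]),
    "outcomes_achievable": within(actuals["pipeline"], forecast["pipeline"]),
}
print(review)  # any False answer prompts a "why", not a quiet forecast revision
```

The output is deliberately a set of yes/no answers rather than revised numbers: a failed check triggers the diagnostic conversation described below, not an automatic rewrite of the forecast.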

If the answer to any of those questions is no, the response should not be to revise the forecast to match the actuals. It should be to understand why the gap exists and whether it requires a change in strategy, a change in execution, or a change in the assumptions the forecast was built on. Those are three different problems with three different solutions, and conflating them is how organisations end up in planning cycles that never improve.

For more on how media planning connects to the broader commercial decisions that drive growth, the Go-To-Market and Growth Strategy hub covers the strategic frameworks that sit behind channel investment decisions, budget allocation, and market entry planning.

What Are the Most Common Forecasting Mistakes?

The first mistake is anchoring too heavily on last year’s numbers. Historical performance is a useful starting point, not a reliable predictor. Markets change, competitors change, platforms change. A forecast built primarily on prior-year actuals without adjusting for current market conditions is not a forecast. It is a projection of the past onto the future.

The second mistake is treating the forecast as a commitment rather than a model. A forecast is your best current estimate of what will happen given what you know. It is not a promise. When organisations treat it as a commitment, teams stop being honest about uncertainty because they are afraid of being held to a number they cannot control. That incentive structure produces worse forecasts, not better ones.

The third mistake is building the forecast in the media team without commercial input. I have seen this happen at agencies and at client-side organisations. The media team produces a detailed channel-level forecast. The commercial team produces a revenue forecast. The two are presented in the same planning meeting without ever having been built together. The result is a planning process that looks integrated but is not.

The fourth mistake is not distinguishing between reach and frequency in the forecast model. A channel can deliver a large number of impressions at low cost while reaching the same small audience repeatedly. If your forecast models gross impressions without modelling unique reach, you will systematically overestimate the breadth of your media investment.
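The reach-versus-frequency distinction is one line of arithmetic, which is exactly why it is so often skipped. Figures below are hypothetical:

```python
# Illustrative sketch: the same gross impression count can mean very
# different unique reach depending on frequency. Figures are hypothetical.
gross_impressions = 30_000_000
avg_frequency = 12.0  # each reached person sees the ad ~12 times
unique_reach = gross_impressions / avg_frequency
print(f"{unique_reach / 1e6:.1f}M people reached, not 30M")  # 2.5M people reached, not 30M
```

A forecast that models both numbers, and states the assumed frequency explicitly, cannot quietly conflate volume with breadth.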

The fifth mistake is the one that is hardest to fix: building a forecast that tells people what they want to hear. I have been in rooms where the forecast was worked backwards from the budget that had already been approved. The numbers were adjusted until they produced the outcome the business needed to justify the spend. Everyone in the room knew what was happening. Nobody said it out loud. That kind of forecasting is not planning. It is performance. And it produces organisations that are very good at presenting numbers and very bad at learning from them. The Forrester analysis of go-to-market struggles touches on exactly this pattern: the gap between what organisations plan and what they are actually capable of executing.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a media forecast in marketing?
A media forecast is a structured projection of advertising spend, channel mix, reach, and expected outcomes over a defined planning period. It provides a working model for budget allocation and performance accountability, and should include explicit assumptions so that gaps between forecast and actuals can be diagnosed and learned from.
How often should a media forecast be updated?
Annual forecasts should be treated as strategic frames, with quarterly reviews used as the operational tool. Each quarterly review should test the assumptions in the original forecast against current market conditions, channel cost trends, and business outcome tracking. If conditions have shifted materially, the plan should be adjusted rather than left to drift.
What is scenario planning in a media forecast?
Scenario planning in a media forecast involves building multiple versions of the plan, typically a base case, an upside case, and a downside case, each based on different assumptions about channel costs, conversion rates, and competitive conditions. Each scenario should include a trigger point: a specific metric that tells you which scenario is playing out so the plan can be adjusted accordingly.
Why do media forecasts so often miss their targets?
Media forecasts miss targets for three main reasons: unreliable input data, models that are too rigid to absorb market changes, and incentive structures that push teams to produce optimistic rather than honest projections. The most common cause in practice is the third one. When forecasts are used to justify budgets rather than inform decisions, the numbers tend to reflect what people want to be true rather than what the evidence supports.
How do you connect a media forecast to business revenue targets?
Connecting a media forecast to revenue targets requires a shared definition of conversion value, an agreed model for how each channel contributes to that conversion across the full funnel, and a review process that compares forecast to actuals at the business outcome level rather than just the channel efficiency level. Building this connection requires the media team and the commercial team to work from the same planning assumptions, which in most organisations requires a deliberate process change.
