Marketing Forecasts Are Wrong. Here’s How to Make Them Useful.
Forecasting marketing results means estimating the likely output of your marketing activity before you spend the money, so you can make better decisions about where to invest, what to expect, and when to change course. It is not about predicting the future with precision. It is about replacing gut feel with a structured, honest approximation that the business can plan around.
Most forecasts fail not because the math is wrong, but because the assumptions underneath them are never challenged. Fix the assumptions and the forecast becomes useful. Leave them unchallenged and you are just producing a number that makes the board feel better until it doesn’t.
Key Takeaways
- A marketing forecast is only as reliable as the assumptions it rests on. Start there, not with the model.
- Attribution tools tell you what happened in your data, not what caused it. Treat them as one input, not the answer.
- Lower-funnel metrics are easier to measure but frequently overstate marketing’s actual contribution to growth.
- Scenario planning, with a base, upside, and downside case, is more useful to a business than a single-point forecast.
- The goal is honest approximation. False precision destroys credibility faster than a wide confidence interval ever will.
In This Article
- Why Most Marketing Forecasts Are Built on Shaky Ground
- What a Useful Marketing Forecast Actually Looks Like
- How to Build the Forecast: A Practical Framework
- The Attribution Problem You Cannot Ignore
- How to Handle the Channels That Are Hard to Measure
- What to Do When Your Forecast Is Wrong
- The Honest Conversation About Forecasting Limits
Why Most Marketing Forecasts Are Built on Shaky Ground
I spent the first half of my career working with performance data every day. Click-through rates, cost per acquisition, return on ad spend. It all felt very measurable, very accountable. And for a long time, I thought that measurability meant accuracy. It doesn’t.
What performance data mostly tells you is what happened inside your tracking infrastructure. It tells you who clicked, who converted, which campaign got the last touch. What it doesn’t tell you is how much of that would have happened anyway. A customer who has already decided to buy will click your retargeting ad on the way to the checkout. Your model records a conversion. Your forecast assumes you caused it. You probably didn’t.
This is the foundational problem with most marketing forecasts. They are built on attribution data that systematically overstates the contribution of lower-funnel activity, and they treat correlation as causation without ever questioning it. When I was running agency teams and we built forecasts for clients, the number one mistake was anchoring on last-period performance and projecting it forward. It felt rigorous. It rarely was.
If you want to understand what’s making go-to-market execution harder right now, a lot of it comes down to this: the signals marketers have relied on for forecasting are becoming less reliable at the same time that business expectations are becoming more demanding. That gap is where forecasting discipline matters most.
What a Useful Marketing Forecast Actually Looks Like
A useful forecast is not a single number. It is a range, built on explicit assumptions, that gives the business a credible view of what marketing can deliver under different conditions. It has three components: inputs, assumptions, and scenarios.
Inputs are the things you can measure with reasonable confidence. Historical conversion rates, average order values, audience sizes, cost-per-click benchmarks, email open rates. These are your building blocks. They are imperfect, but they are real data and you should use them.
Assumptions are where most forecasts go wrong. Every forecast contains assumptions, whether you state them explicitly or not. If you don’t state them, nobody can challenge them, and you end up defending a number nobody understands when it misses. If you do state them, the whole organisation can have a productive conversation about whether they are reasonable. I have always preferred the latter, even when it’s uncomfortable.
Scenarios give the business something to plan around. A base case built on realistic assumptions. An upside case if key variables outperform. A downside case if they don’t. When I was managing turnaround situations at agency level, scenario planning was the thing that kept leadership teams honest. It forced them to ask what they would do if the downside materialised, rather than waiting until it did.
How to Build the Forecast: A Practical Framework
This is not a template you fill in. It is a thinking process. The output will look different depending on whether you are forecasting paid search, content marketing, or a full go-to-market launch. But the logic is the same.
Step 1: Define what you are forecasting
Revenue contribution, leads generated, pipeline created, brand awareness shift, market share movement. Be specific. “Marketing results” is not a forecast. “Qualified leads from paid search in Q2” is. The more specific you are about what you are forecasting, the more honest you have to be about what you actually know.
One thing I learned from judging the Effie Awards is that the campaigns that demonstrate genuine effectiveness are always built around a specific, measurable objective. Not “drive awareness and consideration and purchase intent.” One thing. The same principle applies to forecasting.
Step 2: Identify your conversion chain
Map the steps between marketing activity and the business outcome you are forecasting. Impressions to clicks, clicks to landing page visits, visits to leads, leads to qualified opportunities, opportunities to closed revenue. Each step has a conversion rate. Each conversion rate is an assumption. Write them all down.
When you write them down, you will often find that some of them are based on solid historical data, some are educated guesses, and some are things you have never actually measured. That distinction matters. Flag the uncertain ones. They are where your forecast will break.
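The conversion-chain logic above reduces to multiplying assumed rates through the funnel. A minimal sketch, with every rate written down as an explicit, named assumption (all figures here are illustrative, not benchmarks):

```python
# Sketch of a conversion chain: each step is an explicit, named assumption.
# All rates and volumes below are illustrative, not benchmarks.

chain = [
    ("impressions_to_clicks", 0.02),   # CTR: solid historical data
    ("clicks_to_visits", 0.90),        # tracking loss: educated guess
    ("visits_to_leads", 0.05),         # form conversion: solid data
    ("leads_to_qualified", 0.30),      # never actually measured - flag it
    ("qualified_to_closed", 0.20),     # CRM data: solid
]

def project(start_volume, chain):
    """Walk the chain, recording the expected volume after each step."""
    volumes = {"start": start_volume}
    current = start_volume
    for step, rate in chain:
        current *= rate
        volumes[step] = current
    return volumes

volumes = project(1_000_000, chain)
for step, v in volumes.items():
    print(f"{step:25s} {v:,.0f}")
```

Writing the chain this way makes the weak links visible: the step with no historical data behind it is the one to flag, because a small error there compounds through everything downstream.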
Step 3: Apply market-level reality checks
Your internal data only tells you about the people who already found you. It tells you nothing about the size of the opportunity you haven’t touched. Before you project forward, ask whether the market supports the numbers you are assuming. Market penetration analysis is useful here. If you are already reaching a significant proportion of the addressable market, projecting continued growth at the same rate is not realistic.
This is the mistake I see most often in growth forecasts. A team builds a model based on performance to date, which reflects a period of relatively easy growth, and then projects that rate forward into a market that is either saturating or becoming more competitive. The model looks right. The underlying logic doesn’t hold.
Step 4: Separate demand capture from demand creation
This is the most important distinction in marketing forecasting, and it is almost never made explicitly. Demand capture is what happens when someone is already looking for what you sell and your marketing ensures they find you. Demand creation is what happens when marketing introduces someone to a need or a solution they weren’t already considering.
Lower-funnel channels, paid search in particular, are primarily demand capture. They are efficient and measurable, and they are also finite. There is only so much existing demand to capture. If your forecast for growth is built entirely on capturing more existing demand, you will hit a ceiling faster than you expect.
Upper-funnel activity (content, brand, social) is harder to measure, but it is where new demand comes from. When I was growing agency headcount from 20 to 100 people, the businesses that grew fastest weren’t the ones optimising their capture channels hardest. They were the ones investing in reaching audiences who didn’t know they needed them yet. The forecast models for that kind of activity are less precise, but they are not less important.
This connects directly to the broader question of how marketing fits into a growth strategy. If you’re working through how to structure that thinking, the Go-To-Market and Growth Strategy hub covers the full range of decisions that sit around and underneath marketing forecasting.
Step 5: Build your three scenarios
Take your base case conversion rates and run three versions of the model. In the base case, use your most realistic assumptions. In the upside case, improve your key conversion assumptions by a defensible amount, not wishful thinking. In the downside case, degrade them. The downside case is the most important one for business planning because it forces the question: if this happens, what do we do?
Present all three to leadership with the assumptions visible. If the downside case would cause a real problem for the business, that is a planning conversation you need to have before you spend the budget, not after.
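One way to make the scenario discipline concrete is to apply a defensible adjustment factor to the uncertain conversion assumptions only, leaving the well-evidenced inputs alone. The rates and factors below are illustrative assumptions, not recommendations:

```python
# Base/upside/downside scenario sketch. All rates and adjustment
# factors are illustrative assumptions, not recommendations.

base_assumptions = {
    "visits": 50_000,            # well-evidenced input: left alone
    "visit_to_lead": 0.04,       # uncertain: gets the scenario factor
    "lead_to_customer": 0.10,    # uncertain: gets the scenario factor
    "revenue_per_customer": 1_200,
}

# Defensible adjustments to the uncertain assumptions, not wishful thinking.
scenario_factors = {"downside": 0.80, "base": 1.00, "upside": 1.15}

def forecast_revenue(a, factor):
    """Apply the factor to the uncertain conversion assumptions only."""
    leads = a["visits"] * a["visit_to_lead"] * factor
    customers = leads * a["lead_to_customer"] * factor
    return customers * a["revenue_per_customer"]

for name, factor in scenario_factors.items():
    print(f"{name:10s} ${forecast_revenue(base_assumptions, factor):,.0f}")
```

Note that because the factor hits two chained conversion steps, the downside compounds: a 20% degradation on each step produces a revenue shortfall of roughly a third, which is exactly the kind of non-obvious result the downside case exists to surface.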
The Attribution Problem You Cannot Ignore
Any serious discussion of marketing forecasting has to deal with attribution honestly. Attribution models are useful tools for understanding patterns in your data. They are not accurate representations of causality.
Last-click attribution gives all the credit to the final touchpoint before conversion. It flatters paid search and email and systematically undervalues brand, content, and anything that happens early in the customer experience. First-click has the opposite problem. Data-driven attribution is more sophisticated, but it is still working from correlation in your own data set, and it has no way of accounting for what would have happened without your marketing at all.
The honest answer is that no attribution model tells you the counterfactual. To get closer to causality, you need incrementality testing: running controlled experiments where you withhold marketing from a segment of your audience and compare outcomes. This is harder to set up than reading a dashboard, but it is the only way to know whether your marketing is actually driving results or just showing up at the moment people were going to convert anyway.
I have seen attribution models used to justify budget decisions that were essentially circular. The model credited a channel, the channel got more budget, the model credited it more. Nobody tested whether the underlying assumption was true. Understanding growth loops and feedback mechanisms helps here, because it forces you to think about whether you are measuring a genuine loop or a measurement artifact.
How to Handle the Channels That Are Hard to Measure
Content marketing, brand advertising, PR, organic social. These are the channels that move the needle on long-term growth but resist the kind of precise measurement that makes a CFO comfortable. Most marketers either avoid forecasting them at all, or they bolt on vanity metrics that don’t connect to business outcomes.
A better approach is proxy metrics. You can’t directly measure the revenue impact of a piece of content published today, but you can measure organic search visibility over time, branded search volume, direct traffic trends, and customer survey data on how people heard about you. These are imperfect proxies, but they are honest ones. Present them as leading indicators, not lagging proof.
For brand activity specifically, the most defensible approach is to track share of voice in your category alongside market share over time. There is a well-established relationship between the two, and while it won’t give you a precise ROI figure, it gives you a framework for having a grown-up conversation with the business about why brand investment matters. BCG’s work on brand strategy and go-to-market alignment is worth reading if you need to make that case internally.
Creator-led and social campaigns present a similar challenge. The reach is real, the engagement is real, but connecting it to revenue requires careful thinking about the full funnel, not just the activation moment. Thinking about creator campaigns as go-to-market tools, with a clear role in the funnel, makes forecasting them more tractable.
What to Do When Your Forecast Is Wrong
Your forecast will be wrong. The question is whether it is wrong in an informative way or a useless one.
When a forecast misses, the first instinct is usually to revise the model. That is often the wrong place to start. Start with the assumptions. Which assumption failed? Was it a conversion rate that came in lower than expected? An audience size that turned out to be smaller? A competitive response you didn’t anticipate? Understanding which assumption broke tells you something useful about the market and about your own knowledge gaps.
In my experience, the most common reason a forecast misses is that someone in the planning process knew the assumption was weak but didn’t say so. There is a social dynamic in most organisations where challenging a forecast feels like challenging the person who built it. The fix is structural: make assumption documentation a required part of the forecast process, not an optional add-on. When the assumptions are visible, challenging them is a normal part of the work, not a personal attack.
There is also a useful discipline in tracking your forecast accuracy over time. Not to punish people when they miss, but to understand where your models are systematically biased. If your paid search forecasts are consistently 20% high, that is a pattern worth investigating. If your content forecasts are consistently conservative, that is information too. Looking at how growth-oriented teams structure their measurement can surface approaches that improve forecast accuracy without adding complexity.
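Tracking accuracy over time can be as simple as logging forecast against actual and computing the mean percentage error per channel. A minimal sketch, with a fabricated history purely for illustration:

```python
# Track systematic forecast bias per channel.
# The forecast/actual history below is fabricated for illustration.

history = {
    "paid_search": [(1000, 820), (1200, 1010), (900, 760)],  # (forecast, actual)
    "content":     [(300, 340), (350, 410), (400, 430)],
}

def mean_pct_error(pairs):
    """Average of (forecast - actual) / actual: positive means over-forecasting."""
    return sum((f - a) / a for f, a in pairs) / len(pairs)

for channel, pairs in history.items():
    bias = mean_pct_error(pairs)
    direction = "high" if bias > 0 else "low"
    print(f"{channel:12s} consistently {abs(bias):.0%} {direction}")
```

In this fabricated history, the paid search forecasts come out roughly 20% high and the content forecasts roughly 11% low; both patterns are information about where the model is systematically biased, not grounds for blame.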
The Honest Conversation About Forecasting Limits
Marketing forecasting has real limits, and pretending otherwise does more damage than admitting them. Some of what marketing does is genuinely difficult to isolate. Brand equity builds slowly and pays out unpredictably. Word of mouth is real but hard to model. Competitive dynamics can shift your results without any change in your own activity.
The goal is not a perfect forecast. It is a forecast that is honest about its own uncertainty, built on assumptions that have been scrutinised, and useful enough to support better decisions. That is a lower bar than most organisations set, but it is the right one.
I have sat in rooms where marketing teams presented forecasts with two decimal places of precision on numbers that were, at best, educated guesses. The precision signals confidence. It also signals that nobody in the room is comfortable saying what they don’t know. The organisations that forecast well are the ones where saying “we’re not sure about this assumption” is treated as rigour, not weakness.
One thing worth noting: if your marketing forecast consistently requires heroic assumptions to justify the budget, the problem is sometimes not the forecast. Sometimes it is the product, the pricing, or the market position. Marketing is a blunt instrument when used to compensate for more fundamental business problems. The forecast is often where that truth first becomes visible, if you are willing to look at it honestly. Scaling marketing operations effectively requires that same honesty about what marketing can and cannot do on its own.
For teams working through the broader strategic context around forecasting, including how it connects to market selection, channel strategy, and growth planning, the Go-To-Market and Growth Strategy hub brings those threads together in one place.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
