Demand Forecasting: Stop Guessing, Start Planning

Demand forecasting is the process of estimating how much customer demand exists for a product or service over a given time period, using historical data, market signals, and structured assumptions to inform decisions about inventory, budget, headcount, and go-to-market timing. Done well, it gives leadership a working model of the future. Done badly, it gives leadership false confidence dressed up as analysis.

Most organisations do it badly. Not because they lack data, but because they confuse data volume with analytical rigour, and because the people building the forecasts are rarely the people who have to live with the consequences of getting them wrong.

Key Takeaways

  • Demand forecasting is only as useful as the assumptions underneath it. Surfacing those assumptions explicitly is more valuable than adding more data inputs.
  • Most businesses over-rely on historical trends and under-weight leading indicators, which means forecasts are accurate in stable conditions and useless when conditions change.
  • Lower-funnel signals like search volume and conversion data tell you about existing demand. They tell you almost nothing about demand you haven’t yet created.
  • A forecast is a planning tool, not a prediction. The goal is to reduce the cost of being wrong, not to achieve perfect accuracy.
  • The most dangerous forecast is one that everyone believes but nobody has stress-tested.

Why Most Demand Forecasts Fail Before They’re Finished

I spent the better part of a decade over-indexing on lower-funnel performance data. Conversion rates, cost per acquisition, return on ad spend. The numbers were clean, the attribution looked solid, and the forecasts built on top of them felt rigorous. The problem was that most of what we were measuring was demand that already existed. We were capturing intent, not creating it. When I eventually started asking harder questions about where new demand was actually coming from, the honest answer was: we didn’t really know.

That experience shaped how I think about forecasting now. The failure mode isn’t usually a technical one. It’s a framing one. Businesses build forecasts that describe the past in forward-looking language, and then treat those forecasts as if they represent something real about the future. They don’t. They represent a set of assumptions about continuity, and continuity is exactly what breaks down when markets shift, competitors move, or category dynamics change.

If you’re working through how demand forecasting fits into your broader planning process, it’s worth reading the wider thinking on go-to-market and growth strategy that sits behind this article. Forecasting in isolation is a spreadsheet exercise. Forecasting as part of a coherent growth model is something more useful.

What Does Demand Forecasting Actually Involve?

At its core, demand forecasting involves four things: identifying what you’re trying to forecast, gathering the right inputs, building a model that connects those inputs to an output, and then testing that model against reality over time. None of those steps are complicated in isolation. The difficulty is that most organisations skip step one entirely, rush through step two, over-engineer step three, and never get to step four.

What you’re forecasting matters enormously. Are you forecasting total addressable demand in a category? Your share of that demand? Demand for a specific product line? Demand in a specific channel? These are different questions with different data requirements and different implications for planning. Treating them as interchangeable is where most forecasting exercises start to go wrong.

The inputs fall into two broad categories: lagging indicators and leading indicators. Lagging indicators include historical sales data, past campaign performance, and seasonality patterns. They’re reliable in stable conditions and unreliable when anything structural changes. Leading indicators include search trend data, category growth signals, pipeline velocity, new customer acquisition rates, and market expansion signals. They’re noisier, harder to interpret, and far more useful for understanding where demand is actually heading.

Which Forecasting Methods Actually Work?

There is no single method that works in all contexts. Anyone selling you one is selling you a tool, not a solution. That said, there are a handful of approaches that are worth understanding.

Time series analysis uses historical data to project forward, applying adjustments for trend and seasonality. It works well for established products in stable markets and breaks down quickly in new categories or during periods of disruption. It’s the most commonly used method and probably the most commonly misapplied one.
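To make the idea concrete, here is a minimal sketch of the simplest useful version: repeat last season's pattern, scaled by year-on-year growth. The function and the numbers are illustrative, not a production method.

```python
def seasonal_naive_forecast(history, periods_ahead, season_len=12):
    """Repeat last season's pattern, scaled by year-on-year growth.

    `history` is a list of at least two full seasons of values
    (e.g. 24 monthly figures).
    """
    last_season = history[-season_len:]
    prev_season = history[-2 * season_len:-season_len]
    growth = sum(last_season) / sum(prev_season)  # year-on-year factor
    return [last_season[k % season_len] * growth
            for k in range(periods_ahead)]

# Two flat years, 10% apart: next year projects to roughly 121 per month.
print(seasonal_naive_forecast([100] * 12 + [110] * 12, 3))
```

Notice what even this toy version assumes: that next year looks like last year, scaled. That is precisely the continuity assumption that breaks down during disruption.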

Causal modelling attempts to identify the factors that drive demand and build relationships between those factors and the forecast output. Done well, it’s more robust than time series because it can accommodate structural changes. Done badly, it’s a sophisticated way of building a model that fits the past perfectly and predicts the future poorly. The classic trap is overfitting: adding variables until the model explains historical variance, at the expense of predictive validity.

Qualitative methods, including expert judgment, customer surveys, and sales force input, are consistently undervalued in organisations that have invested heavily in data infrastructure. They shouldn’t be. When I was growing a performance marketing agency from around 20 people to over 100, some of the most accurate demand signals we had came from conversations with clients about their own pipeline and growth plans. No algorithm was going to give us that. The data confirmed what the conversations had already told us.

Scenario planning isn’t a forecasting method in the traditional sense, but it’s arguably the most useful tool in the set. Rather than producing a single point estimate, scenario planning produces a range of plausible futures, each with different demand implications. It forces teams to articulate the assumptions underneath the forecast, which is the single most valuable thing a forecasting process can do. If you haven’t read BCG’s work on scaling planning processes in agile organisations, it’s worth your time. The principles apply well beyond software teams.

How Do You Build a Demand Forecast From Scratch?

If you’re starting from scratch, here is a sequence that works in practice, not just in theory.

Start by defining the unit of forecast. Are you forecasting revenue, volume, leads, or something else? Over what time horizon? At what level of granularity, by product, by channel, by geography? The more precisely you define what you’re forecasting, the more useful the output will be. Vague inputs produce vague outputs, and vague outputs produce bad decisions.

Next, gather your historical baseline. Three years of data is a reasonable minimum for identifying seasonal patterns. Five years starts to reveal structural trends. If you don’t have that data internally, you may be able to proxy it using category-level data from industry sources or search trend data from tools like SEMrush. SEMrush’s overview of growth tools covers some useful starting points for building category-level demand signals, even if growth hacking isn’t your primary frame.

Then, layer in your leading indicators. What signals exist in the market that tend to precede changes in demand? For B2B businesses, this might include hiring activity in target sectors, funding announcements, or regulatory changes. For consumer businesses, it might include search volume trends, social conversation volume, or retail traffic data. The point is to find signals that move before demand moves, not signals that confirm what you already know.

Build your model, but keep it simple. A model with three well-chosen inputs that you understand is worth more than a model with fifteen inputs that nobody can explain. Complexity in forecasting models usually reflects uncertainty about which variables matter, not sophistication in the analysis.
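As a sketch of what "three well-chosen inputs" can look like in practice, here is a small ordinary-least-squares fit. The indicator names and every number below are hypothetical; the point is the shape of the model, not the coefficients.

```python
import numpy as np

# Hypothetical leading indicators, one value per month (illustrative data).
search_index      = np.array([80, 85, 90, 95, 100, 108], dtype=float)
pipeline_velocity = np.array([1.0, 1.1, 1.0, 1.2, 1.3, 1.3])
category_growth   = np.array([2.0, 2.1, 2.0, 2.2, 2.3, 2.5])
demand            = np.array([400, 430, 440, 480, 510, 545])  # observed units

# Fit demand ~ intercept + three inputs by ordinary least squares.
X = np.column_stack([np.ones(len(demand)),
                     search_index, pipeline_velocity, category_growth])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

def predict(search, velocity, growth):
    """Forecast demand for one period from the three inputs."""
    return float(coef @ np.array([1.0, search, velocity, growth]))
```

Three inputs you can explain beat fifteen you can't: each coefficient here corresponds to a named assumption someone can challenge in a review.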

Document your assumptions explicitly. This is the step that almost nobody does and the step that almost always matters most. When the forecast turns out to be wrong, the question isn’t just “what did we miss?” It’s “which assumption failed, and why?” Without documented assumptions, you can’t learn from forecast errors in any systematic way. You just update the numbers and repeat the same mistakes.

Finally, build a review cadence. A forecast that isn’t reviewed against actuals is a planning document that nobody is accountable for. Monthly reviews at minimum. Quarterly deep dives that examine not just the variance, but the reasons for it.
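The review habit doesn't need tooling to start. A sketch of the monthly pass, with illustrative numbers and a 10% variance threshold chosen purely for the example:

```python
# Monthly forecast-vs-actuals review: compute variance and flag
# months that warrant a deeper look (threshold is an example choice).

forecast = [100, 110, 120, 125, 130, 140]
actuals  = [ 98, 112, 119, 140, 128, 155]

for month, (f, a) in enumerate(zip(forecast, actuals), start=1):
    variance = (a - f) / f
    flag = "  <- investigate" if abs(variance) > 0.10 else ""
    print(f"Month {month}: forecast {f}, actual {a}, "
          f"variance {variance:+.1%}{flag}")
```

The flagged months are where the quarterly deep dive starts: not just how big the variance was, but which assumption produced it.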

What Data Sources Should You Use?

The right data sources depend on your category, your business model, and what questions you’re actually trying to answer. That said, there are some consistently useful inputs.

Internal sales data is the obvious starting point. It’s the most directly relevant data you have, and it’s usually the most underused. Most businesses know their total revenue. Fewer have clean data on demand by segment, channel, or product line over time. Getting that data into a usable state is often the most valuable thing a forecasting project produces, independent of the forecast itself.

Search data is a reliable proxy for existing demand in most categories. If people are searching for something, they want it. If search volume for a category is growing, demand is growing. If it’s declining, that’s a signal worth taking seriously. The limitation is that search data reflects demand that’s already been created. It tells you what people are looking for, not why they started looking, or what would make more of them look.

Pipeline and CRM data is underutilised in most marketing forecasting exercises, which tend to be built in isolation from sales. This is a mistake. Pipeline velocity, lead quality scores, and conversion rates by segment are some of the cleanest leading indicators available to most B2B businesses. Vidyard’s analysis of why go-to-market feels harder touches on the disconnect between marketing signals and commercial reality that makes this integration so important.

Third-party category data, where it exists, provides a useful external benchmark. Industry reports, analyst forecasts, and sector-level data can help you understand whether your demand trajectory is in line with the market or diverging from it. Divergence is always worth investigating. Forrester’s work on go-to-market challenges in specific verticals is a good example of the kind of category-level thinking that can sharpen a forecast.

How Do You Account for Demand You Haven’t Created Yet?

This is the question that most forecasting frameworks don’t answer well, and it’s the one that matters most for growth.

There’s a useful analogy I’ve used with clients for years. Think about a clothes shop. Someone who tries something on is far more likely to buy it than someone who walks past the window. The act of trying creates a different kind of consideration. The challenge for the shop is that if you only measure purchase intent among people who are already in the changing room, you’ll optimise relentlessly for that group and completely miss the opportunity to bring more people through the door in the first place.

Most demand forecasting has the same blind spot. It’s built around signals from people who are already in the consideration set. It tells you how much of the existing demand you’re likely to capture. It tells you almost nothing about how to expand the pool of people considering you at all.

To forecast latent or created demand, you need different inputs. Category penetration rates, total addressable market estimates, brand awareness data, and new-to-category customer acquisition rates are all more relevant than conversion rate data when the question is about growth rather than efficiency. SEMrush’s breakdown of growth examples illustrates how different the inputs look when you’re trying to understand demand creation versus demand capture.

The honest answer is that created demand is harder to forecast than captured demand, and anyone who tells you otherwise is oversimplifying. What you can do is build scenario models that incorporate different assumptions about market expansion, run controlled experiments to test those assumptions, and update your forecasts as the evidence accumulates. It’s less precise than a time series projection. It’s also far more useful for businesses that are trying to grow rather than just maintain.

How Do You Stress-Test a Demand Forecast?

The most dangerous forecast is one that everyone believes but nobody has challenged. I’ve sat in enough planning sessions to know that once a number goes into a slide deck, it acquires a kind of institutional authority that makes it very hard to question. The job of stress-testing is to question it before it becomes load-bearing.

There are a few useful approaches. The first is sensitivity analysis: vary your key assumptions and see how much the output changes. If a 10% change in one variable moves your forecast by 40%, that variable deserves a lot more attention than it’s probably getting. If the forecast is robust to a wide range of assumptions, that’s genuinely useful information.
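A sensitivity pass can be this simple. The toy profit model and all figures below are illustrative assumptions; the pattern is what matters: shock one input at a time and rank the impacts.

```python
# One-at-a-time sensitivity test on a toy profit forecast.
# All figures are illustrative, not benchmarks.

def forecast_profit(market_size, share, price, fixed_costs):
    return market_size * share * price - fixed_costs

baseline = {"market_size": 1_000_000, "share": 0.04,
            "price": 55.0, "fixed_costs": 1_500_000}
base = forecast_profit(**baseline)

sensitivities = {}
for name in baseline:
    bumped = dict(baseline)
    bumped[name] *= 1.10  # +10% shock to one assumption
    sensitivities[name] = forecast_profit(**bumped) / base - 1

for name, delta in sorted(sensitivities.items(), key=lambda kv: -abs(kv[1])):
    print(f"+10% {name:12s} -> {delta:+.1%} change in forecast profit")
```

In this toy case a 10% price increase moves profit by over 30%, because the fixed cost base amplifies the revenue change. That asymmetry is exactly what the exercise is meant to surface.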

The second is pre-mortem analysis. Before you finalise the forecast, ask the team to imagine it’s twelve months from now and the forecast was significantly wrong. What happened? What did we miss? This exercise surfaces assumptions that people hold privately but haven’t articulated, and it tends to produce more honest conversations than asking “does anyone see any problems with this?” in a room where the forecast was built by the person running the meeting.

The third is external benchmarking. How does your forecast compare to category growth rates? How does it compare to what your competitors are likely planning? If your forecast assumes you’ll grow significantly faster than the market, that’s not impossible, but it requires an explicit explanation of why. “We’ll execute better” is not an explanation. Specific advantages, specific mechanisms, specific evidence. That’s an explanation.

What Role Does Marketing Play in Demand Forecasting?

In most organisations, marketing is a consumer of demand forecasts rather than a builder of them. Finance builds the numbers, sales validates them, and marketing is handed a budget allocation that reflects the output. This is a mistake, and not just because it produces worse forecasts.

Marketing sits closest to the signals that matter most for understanding demand before it shows up in sales data. Brand health metrics, category interest data, content engagement patterns, and new audience acquisition rates are all marketing-owned inputs that can dramatically improve forecast quality. When marketing isn’t part of the forecasting conversation, those inputs don’t make it into the model.

There’s also a planning feedback loop that only works if marketing is involved upstream. If marketing doesn’t understand the demand assumptions that underpin the commercial plan, they can’t build programmes that test those assumptions or surface early warning signs when the assumptions are failing. They’re just executing against a brief with no visibility into whether the brief was built on solid ground.

The early part of my career was spent in agencies where we were handed briefs and asked to deliver against them. The best client relationships I had were the ones where we were in the room when the brief was being written, because that’s where the interesting questions lived. Demand forecasting is the same. The value isn’t just in the output. It’s in the conversation that produces it.

If you want to see how demand forecasting connects to broader commercial planning, the go-to-market and growth strategy hub covers the surrounding territory in more depth. Forecasting is one input into a larger system, and it works better when that system is coherent.

How Do You Communicate Forecast Uncertainty to Stakeholders?

This is where most forecasting processes break down in practice, not because the analysis is poor, but because the communication norms around forecasting in most organisations are deeply unhelpful.

Stakeholders typically want a number. A single, specific, defensible number that they can put in a plan and hold someone accountable to. The problem is that a single number implies a level of precision that no forecast can honestly claim. When the actual result diverges from the forecast, which it will, the conversation becomes about who got the number wrong rather than what the variance tells us about the underlying assumptions.

A more honest approach is to present forecasts as ranges with explicit confidence levels and documented assumptions. “Our central case is X, with a realistic range of Y to Z, and here are the three assumptions that drive most of the variance.” This is harder to put in a board presentation. It’s also far more useful for decision-making, because it makes the uncertainty visible rather than hiding it inside a false precision.
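In practice, the range comes from running the same model under different assumption sets. A rough sketch, with hypothetical numbers throughout:

```python
# Low / central / high assumption sets (all values illustrative).
scenarios = {
    "low":     {"market_growth": 0.00, "share_gain": 0.000},
    "central": {"market_growth": 0.04, "share_gain": 0.005},
    "high":    {"market_growth": 0.08, "share_gain": 0.010},
}

current_revenue = 2_000_000   # this year's revenue
current_share   = 0.05        # estimated share of category spend

def project(market_growth, share_gain):
    """Next year's revenue under one assumption set."""
    market = (current_revenue / current_share) * (1 + market_growth)
    return market * (current_share + share_gain)

results = {name: project(**a) for name, a in scenarios.items()}
for name, value in results.items():
    print(f"{name:8s} £{value:,.0f}")
```

The output isn't one number but three, each tied to named assumptions about market growth and share gain, which is what makes the resulting stakeholder conversation useful.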

I’ve had this conversation with enough finance directors and CEOs to know that the initial reaction is often resistance. People want certainty, or at least the appearance of it. The way to bring them around is not to argue about epistemology. It’s to show them, over time, that the range-based forecast with explicit assumptions produces better decisions than the point estimate that nobody believes but everyone uses. That takes patience, but it’s worth it.

For teams thinking about how forecasting connects to go-to-market execution, Later’s go-to-market resources offer some practical framing on how demand signals translate into campaign planning, particularly for consumer-facing businesses.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between demand forecasting and sales forecasting?
Sales forecasting estimates what your business will sell, based on pipeline, conversion rates, and sales capacity. Demand forecasting estimates how much demand exists in the market for what you sell, regardless of whether your business captures it. The two are related but distinct. A business can have strong sales forecasts and still be missing significant demand it hasn’t reached. Understanding the gap between total demand and captured demand is one of the most useful things a forecasting process can produce.
How far ahead should a demand forecast look?
It depends on what decisions the forecast is meant to inform. Operational decisions around inventory and staffing typically need 30-to-90-day horizons. Budget and go-to-market planning usually requires a 12-month view. Strategic decisions about market entry, product development, or capacity investment may need a two-to-five-year horizon. The further out the forecast, the wider the uncertainty range should be, and the more important it is to build scenario models rather than point estimates.
What is the most common mistake in demand forecasting?
The most common mistake is treating historical trends as a reliable guide to future demand without examining the assumptions that would need to hold for those trends to continue. Markets change, competitors move, and category dynamics shift. A forecast built entirely on extrapolating the past is accurate in stable conditions and wrong precisely when you most need it to be right. The fix is to surface and document the assumptions explicitly, and to build scenarios that test what happens when those assumptions fail.
Can small businesses forecast demand without sophisticated tools?
Yes. The tools matter far less than the discipline of the process. A small business with clean internal sales data, a working knowledge of search trend tools, and a habit of documenting assumptions can produce a more useful forecast than a large enterprise running an expensive forecasting platform without clear ownership or review processes. Start with what you know, be honest about what you don’t know, and build the review habit before you invest in the technology.
How do you forecast demand for a new product with no historical data?
Without historical data, you need to rely on analogues and leading indicators. Find the closest existing category or product and use its demand trajectory as a reference point, adjusting for the differences in your product, market, and timing. Use search volume data to size existing interest in the problem your product solves. Use customer research and early sales conversations to test willingness to pay and purchase intent. Build scenario models with explicit assumptions about adoption rates, and treat the forecast as a hypothesis to be tested rather than a projection to be defended.
