Demand Forecasting for New Products: Before You Spend a Penny

Demand forecasting for a new product is the process of estimating how much of that product the market will buy, over a defined period, before you have real sales data to work with. Done well, it gives you a defensible basis for budget decisions, channel selection, and go-to-market sequencing. Done badly, it gives you a spreadsheet that makes everyone feel confident right up until the moment it is wrong.

Most forecasts for new products are not forecasts at all. They are financial targets dressed up as market analysis. The difference matters enormously, and it is the first thing worth getting straight before you build a single model.

Key Takeaways

  • Most new product demand forecasts fail because they start with a revenue target and work backwards, rather than starting with evidence about real buyer behaviour.
  • Combining top-down market sizing with bottom-up channel modelling gives you two independent estimates, and the gap between them is where the real strategic thinking happens.
  • Proxy data, search volume, and competitor signals are more useful than primary research alone, especially when buyers do not yet know they want what you are selling.
  • Forecasting is not a one-time exercise. The first 90 days of a launch generate more signal than any pre-launch research, and your model should be built to absorb that data quickly.
  • The honest job of a demand forecast is not to predict the future accurately. It is to make your assumptions visible, so that when reality diverges from the model, you know exactly which assumption broke and why.

If you are working through a broader go-to-market build, the Go-To-Market and Growth Strategy hub covers the full range of decisions that sit around and beneath a demand forecast, from positioning and channel strategy to scaling and measurement.

Why Most New Product Forecasts Are Wrong Before They Start

I have sat in a lot of rooms where a demand forecast was presented. The number on the slide is usually confident. The methodology behind it is usually thin. And the people presenting it are often the same people whose bonuses depend on the product succeeding, which is not a great condition for objective analysis.

The most common failure mode is working backwards from a target. Someone decides the product needs to generate £5 million in year one. The forecast is then constructed to justify that number: a large addressable market, a modest penetration rate, a plausible average order value. Everything looks reasonable in isolation. The problem is that none of it is anchored to how buyers actually behave, what it costs to reach them, or how long it takes to convert them.

I saw this pattern repeatedly when I was running agency pitches and reviewing client briefs. The brief would say something like “we are targeting 2% of a £500 million market in year one.” When you asked how they arrived at 2%, the answer was almost always some version of “it seemed conservative.” Conservative relative to what? To the target? That is not forecasting. That is rationalisation.

A real demand forecast starts with evidence about buyer behaviour, not with a number you need to hit. That is a harder conversation to have internally, but it is the only one worth having.

What Data Can You Actually Use Before Launch?

The honest answer is that pre-launch data for a genuinely new product is always incomplete. You are forecasting something that has not happened yet, in a market that may not fully exist yet, for buyers who may not know they want it. That is the fundamental challenge. The question is not how to eliminate uncertainty, it is how to use the data you have access to as intelligently as possible.

There are four categories of data worth working through systematically.

Search and Intent Data

Search volume is one of the most underused inputs in demand forecasting. If people are already searching for the problem your product solves, that is a measurable proxy for existing demand. If they are not, that tells you something important about the education burden you are taking on. Tools like SEMrush give you a reasonable read on market penetration signals and the size of existing search demand in a category, which can anchor the top of your funnel model.

The nuance here is that search data captures expressed demand, not latent demand. Someone searching for “project management software for agencies” knows they have a problem and is actively looking for a solution. Someone who has never thought about the problem will not appear in that data at all. For products that create new categories, search volume will understate total opportunity significantly. For products entering established categories, it will be a more reliable signal.

Competitor and Proxy Data

If competitors exist, their publicly available signals are valuable. Pricing, packaging, review volume, hiring patterns, and funding rounds all tell you something about the size and growth rate of a market. A competitor that has raised three rounds and doubled headcount in two years is a reasonable signal that the market is real and growing. A category where the market leader has been flat for five years tells a different story.

Proxy markets are useful when direct comparisons do not exist. If you are launching a new type of B2B analytics tool, look at the adoption curves of adjacent tools that solved a similar type of problem for a similar type of buyer. The trajectory will not be identical, but it gives you a more grounded starting point than a blank sheet.

Primary Research (Used Carefully)

Surveys and interviews have a role, but they need to be interpreted with scepticism. Buyers consistently overstate their purchase intent in research settings. If 40% of survey respondents say they would “definitely” or “probably” buy your product, the actual conversion rate in market will be a fraction of that. Some practitioners apply a rule of thumb: take the “definitely would buy” percentage and divide by ten to get a rough real-world estimate. That is not precise, but it is a useful corrective against taking stated intent at face value.
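The divide-by-ten corrective above is easy to make concrete. A minimal Python sketch, with all survey percentages invented for illustration:

```python
# Hypothetical survey results: share of respondents by stated-intent band.
survey = {
    "definitely_would_buy": 0.12,  # 12% of respondents
    "probably_would_buy": 0.28,    # 28% of respondents
}

# Rule of thumb from the text: divide "definitely would buy" by ten
# to get a rough real-world conversion estimate. This is a corrective
# heuristic, not a precise model.
estimated_real_conversion = survey["definitely_would_buy"] / 10

print(f"Stated intent: {survey['definitely_would_buy']:.0%}")
print(f"Rough real-world estimate: {estimated_real_conversion:.1%}")  # 1.2%
```

Note that the "probably" band is ignored entirely here, which is deliberate: treating soft intent as zero is the conservative position until you have real conversion data.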

Qualitative interviews are more useful than surveys for understanding the decision-making process: who is involved, what triggers a purchase, what objections arise, how long the cycle takes. That information feeds your funnel model more usefully than a percentage who say they would buy.

Internal Data and Analogues

If you have launched products before, your own historical data is the most reliable input you have. Conversion rates from trial to paid, average sales cycle length, churn in the first 90 days: these are real numbers from real buyers, and they should anchor your model wherever they are applicable. The question is always how closely the new product resembles the ones you have launched before, and where the analogy breaks down.

Top-Down and Bottom-Up: Run Both Models

There are two standard approaches to demand forecasting, and the right answer is to run both and then interrogate the gap between them.

Top-down starts with the total addressable market and applies a penetration rate to arrive at a revenue estimate. It is quick to construct and easy to communicate, which is why it gets used so often in board presentations. Its weakness is that penetration rates are almost always assumed rather than derived, and small changes in the assumed rate produce enormous swings in the output.

Bottom-up starts with your actual go-to-market capacity: how many channels you can activate, what conversion rates you expect at each stage, how much budget you have to deploy, and how long each sales cycle takes. It is more granular and more honest about constraints. Its weakness is that it can underestimate opportunity if you are too conservative about what is achievable.
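The two models, and the gap between them, can be sketched in a few lines. Every number below is an illustrative placeholder, not a benchmark:

```python
# Top-down: total addressable market x assumed penetration rate.
tam = 500_000_000        # £500m addressable market (illustrative)
penetration = 0.02       # the "2% seemed conservative" assumption
top_down = tam * penetration

# Bottom-up: actual go-to-market capacity, stage by stage.
reachable_accounts = 4_000  # accounts you can realistically contact this year
meeting_rate = 0.10         # contacts that become sales conversations
win_rate = 0.20             # conversations that close
avg_deal = 25_000           # £ average contract value
bottom_up = reachable_accounts * meeting_rate * win_rate * avg_deal

gap_ratio = bottom_up / top_down
print(f"Top-down:  £{top_down:,.0f}")               # £10,000,000
print(f"Bottom-up: £{bottom_up:,.0f}")              # £2,000,000
print(f"Bottom-up is {gap_ratio:.0%} of top-down")  # 20%
```

The output is less interesting than the interrogation it forces: with these placeholder inputs the bottom-up number lands at 20% of the top-down number, which is the kind of gap that prompts a useful conversation rather than a rebuild.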

When I was growing an agency from around 20 people to over 100, we ran both models for every significant new service line we launched. The top-down model told us whether the market was large enough to bother. The bottom-up model told us what we could realistically win given our actual capacity to sell and deliver. The gap between the two was always where the most useful conversations happened. If the bottom-up number was 20% of the top-down number, that was fine. If it was 2%, something was wrong with one of the models, and we needed to find out which.

For B2B products in particular, the bottom-up model tends to be more reliable, because the sales process is more defined and the buyer universe is more finite. If you are selling into a defined segment, say mid-market B2B financial services firms, you can often count the number of potential accounts, estimate realistic pipeline conversion, and build a credible revenue model from first principles. Resources like BCG’s analysis of financial services go-to-market strategy show how segmented approaches to defined buyer populations produce more actionable forecasts than broad market estimates.

How to Model the Funnel Before You Have Funnel Data

A demand forecast is only as useful as the funnel assumptions underneath it. You need to estimate, at minimum: how many people you can reach in your target market, what percentage of those will engage with your marketing, what percentage of engaged prospects will enter a sales process, and what percentage of those will convert to customers. Each of those rates needs a source, even if the source is an analogue from a comparable product or market.
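The requirement that each rate needs a source can be enforced in the model itself. A sketch, with hypothetical stage names, rates, and source labels:

```python
# Each assumption carries a 'source' label so reviewers can see what
# it rests on. All figures here are illustrative placeholders.
funnel = [
    ("reachable_market", 50_000, "audience size estimate + list data"),
    ("engage_rate",      0.05,   "analogue: prior service-line launch"),
    ("enter_sales_rate", 0.15,   "industry benchmark, unvalidated"),
    ("close_rate",       0.25,   "own historical data, adjacent product"),
]

# Multiply through the funnel, printing each assumption and its source.
customers = funnel[0][1]
for name, rate, source in funnel[1:]:
    customers *= rate
    print(f"{name:18} {rate:>6.0%}  source: {source}")

print(f"Forecast customers: {customers:.0f}")
```

The point of the `source` column is the discipline, not the arithmetic: an assumption labelled "industry benchmark, unvalidated" gets challenged in review in a way that a bare 15% in a spreadsheet cell never does.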

One thing that has shaped how I think about this is the performance marketing trap. There is a tendency to build forecasts almost entirely around lower-funnel activity: paid search, retargeting, conversion optimisation. The logic is that these are the most measurable channels, so they are the easiest to model. But much of what performance marketing captures is demand that already existed. Someone who was going to buy anyway finds you through a search ad instead of organic, and the ad gets credited for the sale.

For a new product, that problem is compounded. If nobody knows the product exists, there is no existing demand to capture. The funnel has to start further back, with awareness and consideration, and those stages are harder to model precisely. That does not mean you skip them. It means you build assumptions for them, make those assumptions visible, and update them as soon as you have real data.

It is worth thinking about channel mix at this stage too. Different channels have very different cost structures and conversion characteristics. Pay per appointment lead generation, for example, has a very different economic profile from content-led inbound, and the two will produce different funnel shapes. Your forecast needs to reflect the specific channels you are actually planning to use, not a generic industry average.


For products going into defined vertical markets, endemic advertising can be a useful channel to model separately, because it reaches buyers in a context where they are already thinking about the category. That tends to produce higher engagement rates than broad-reach channels, which affects your funnel assumptions materially.

What Your Website and Digital Presence Tell You About Demand Readiness

Before you finalise a demand forecast, it is worth doing an honest audit of your ability to convert the demand you are projecting. A forecast that assumes 5% of website visitors will start a trial is meaningless if your website is not built to support that conversion. I have seen forecasts built on ambitious funnel assumptions that fell apart the moment anyone looked at the actual user experience on the product page.

Running a structured review of your digital presence before launch is not optional. The checklist for analysing your company website for sales and marketing strategy is a practical starting point for identifying the gaps between where your conversion assumptions sit and where your current digital infrastructure actually performs.

This matters for forecasting because your conversion rates are not fixed inputs. They are outputs of how well your marketing and product experience work together. If you are forecasting 3% trial conversion but your current site converts at 0.8%, you either need to factor in the investment required to close that gap, or you need to revise your forecast. Both are legitimate choices. Ignoring the gap is not.

The Role of Assumptions Documentation

The most valuable thing a demand forecast can do is make its assumptions visible. Not buried in an appendix, but front and centre, so that everyone reviewing the forecast understands exactly what would have to be true for the numbers to be right.

I think about this in terms of what I would call assumption sensitivity. Which assumptions, if they are wrong by 20%, would break the model entirely? Which ones could be off by 50% and still leave the business case intact? Those are very different levels of risk, and a good forecast makes that distinction clear.

For a new product, the assumptions that tend to be most sensitive are: time to first sale, average sales cycle length, and early churn rate. These are also the assumptions that are hardest to get right without real data. That is not a reason to avoid making them. It is a reason to flag them explicitly and build a plan to validate them as quickly as possible after launch.
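Assumption sensitivity is straightforward to test mechanically: degrade each input by 20% in turn and see what it does to the output. A sketch with an invented revenue model and illustrative baseline numbers:

```python
# Baseline assumptions (illustrative placeholders).
base = {
    "leads_per_month": 200,
    "close_rate": 0.20,
    "avg_deal": 10_000,
    "early_churn": 0.30,  # customers lost in the first 90 days
}

def year_one_revenue(a):
    """Toy revenue model: annual leads x close rate x retention x deal size."""
    customers = a["leads_per_month"] * 12 * a["close_rate"]
    retained = customers * (1 - a["early_churn"])
    return retained * a["avg_deal"]

baseline = year_one_revenue(base)
for key in base:
    # "20% worse" means lower for the first three inputs, higher for churn.
    factor = 1.2 if key == "early_churn" else 0.8
    worse = dict(base, **{key: base[key] * factor})
    impact = (year_one_revenue(worse) - baseline) / baseline
    print(f"{key:16} 20% worse -> revenue {impact:+.0%}")
```

In this toy model the multiplicative inputs each move revenue one-for-one, while churn moves it less; in a real model with ramp times and compounding churn the spread is wider, and the exercise shows you exactly which assumptions deserve the validation budget.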

When I was judging the Effie Awards, one of the things that separated the strongest entries from the weaker ones was not the results, it was the clarity of the strategy that preceded the results. The best submissions could tell you exactly what they believed going in, what they were testing, and what they learned. That discipline is the same one that separates a useful demand forecast from a number on a slide.

Using the First 90 Days as a Calibration Period

No pre-launch forecast survives contact with real buyers unchanged. The question is not whether your assumptions will need updating, it is how quickly you can identify which ones are wrong and by how much.

The first 90 days of a launch generate more useful signal than any amount of pre-launch research. You will learn your actual cost per lead, your real conversion rate from trial to paid, how long the sales cycle actually takes, and which objections come up most often. All of that should flow back into your model immediately.

Build your forecast in a format that makes updating easy. If your model is a static spreadsheet that someone built once and never touched again, it will not get updated when the data changes. If it is a live model with clearly labelled assumption inputs, updating it as new data arrives is a natural part of the process rather than a political admission that the original forecast was wrong.
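One simple way to make that updating mechanical is to blend each pre-launch assumption with observed results, weighted by how much confidence the original assumption deserved. The pseudo-count weighting below is our illustration of the idea, not a standard formula, and all numbers are hypothetical:

```python
def update_rate(prior_rate, prior_weight, observed_successes, observed_trials):
    """Blend a pre-launch rate assumption with observed conversions.

    prior_weight is the 'equivalent sample size' you grant the original
    assumption: a judgment call, stated explicitly rather than hidden.
    """
    return (prior_rate * prior_weight + observed_successes) / (
        prior_weight + observed_trials
    )

# Pre-launch we assumed 3% trial-to-paid, with confidence worth ~100 trials.
prior = 0.03
# First 90 days of real data: 12 conversions from 600 trials (2%).
updated = update_rate(prior, 100, 12, 600)
print(f"Updated trial-to-paid assumption: {updated:.2%}")
```

The appeal of this shape is political as much as statistical: the model absorbs new data by construction, so revising the number is routine maintenance rather than an admission that the original forecast was wrong.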

There is useful thinking on how growth teams can structure this kind of iterative approach in SEMrush’s analysis of growth hacking examples, particularly around how fast-moving teams build feedback loops between market activity and planning assumptions.

For B2B products, the calibration period also needs to include pipeline data, not just top-of-funnel metrics. If you are generating leads but they are not converting to qualified opportunities, the problem is either in the audience targeting or in the product-market fit, and those require very different responses. Tools like Vidyard’s research on untapped pipeline potential highlight how often revenue forecasts diverge from pipeline reality because the two are not being tracked against each other in real time.

When the Forecast Needs to Inform Bigger Decisions

Demand forecasting is not just a planning exercise. In certain contexts, it becomes a due diligence input, a board-level decision tool, or a factor in a commercial negotiation. The rigour required scales accordingly.

If you are forecasting demand as part of a business acquisition or investment decision, the methodology needs to be defensible to people who will actively look for weaknesses in it. That means documented sources for every major assumption, scenario modelling across at least three outcomes (base, upside, downside), and a clear articulation of the conditions under which the forecast would fail. The digital marketing due diligence framework is relevant here, particularly for understanding how historical digital performance can and cannot be extrapolated into a new product context.

For B2B technology companies specifically, demand forecasting often sits within a broader corporate and product marketing structure where multiple business units have competing claims on market opportunity. Getting the forecast right requires understanding how the product fits within the wider portfolio and what cannibalisation risks exist. The corporate and business unit marketing framework for B2B tech companies is a useful reference for how to structure that conversation.

Similarly, if you are operating in financial services or another regulated sector, the demand forecast needs to account for compliance constraints on how you can reach and convert buyers. B2B financial services marketing has specific dynamics around buyer trust, sales cycle length, and procurement complexity that will materially affect your funnel assumptions if you do not build them in from the start.

Scaling a product’s go-to-market model introduces another layer of forecasting complexity. BCG’s work on scaling agile organisations is relevant here because the operational constraints on growth (team capacity, process maturity, and speed of iteration) all feed back into what a realistic demand trajectory looks like over a two to three year horizon.

The broader point is that a demand forecast for a new product is always a living document, not a one-time deliverable. The teams that treat it as a fixed number to be defended tend to be the ones who are most surprised when reality diverges from the plan. The teams that treat it as a model to be continuously refined tend to be the ones who catch problems early enough to do something about them.

There is more on the strategic decisions that sit around demand forecasting, including channel selection, audience segmentation, and growth model design, across the articles in the Go-To-Market and Growth Strategy hub.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is demand forecasting for a new product?
Demand forecasting for a new product is the process of estimating how much of that product the market will buy over a defined period, before real sales data exists. It combines market sizing, buyer behaviour research, funnel modelling, and competitor analysis to produce a defensible estimate that can guide budget, channel, and go-to-market decisions.
What is the difference between top-down and bottom-up demand forecasting?
Top-down forecasting starts with the total addressable market and applies a penetration rate to estimate revenue. Bottom-up forecasting starts with your actual go-to-market capacity, including channel conversion rates, budget, and sales cycle length, and builds up to a revenue estimate from there. Running both and comparing the gap between them is more useful than relying on either alone.
How reliable is survey data for forecasting new product demand?
Survey data consistently overstates purchase intent. Buyers say they would buy at much higher rates than they actually convert in market. Qualitative interviews are more useful for understanding the decision-making process, while stated intent data from surveys should be treated as directional rather than predictive. Applying a significant discount to stated intent figures before using them in a forecast is standard practice.
How do you forecast demand when there is no existing market for the product?
When no direct market exists, proxy markets are the most useful starting point. Look at adjacent products that solved a similar problem for a similar buyer, and examine their adoption curves. Search volume for the underlying problem, rather than the product category, can also indicate latent demand. What matters is to be explicit about where the analogy holds and where it breaks down, rather than treating proxy data as a direct read on your specific opportunity.
How often should a new product demand forecast be updated?
A demand forecast for a new product should be treated as a live model, not a fixed deliverable. The first 90 days of launch generate enough real data (actual cost per lead, conversion rates, sales cycle length) to warrant a material update to the model. After that, a quarterly review cycle is typical, with ad hoc updates whenever a significant assumption is proven wrong by market data.
