Pipeline Forecasting Is Broken. Here’s How to Fix It
Pipeline forecasting is the process of estimating how much revenue a sales team will close within a given period, based on the deals currently in play and their likelihood of converting. Done well, it gives leadership a credible view of future revenue so they can make confident decisions about hiring, spend, and growth. Done badly, it produces numbers that feel precise but mean almost nothing.
Most organisations are doing it badly. Not because they lack data, but because they confuse the appearance of rigour with the thing itself.
Key Takeaways
- Pipeline forecasting fails most often not from a lack of data, but from false precision: treating estimates as facts and CRM hygiene as a proxy for commercial judgement.
- Stage-based probability weighting is widely used and widely misleading. A deal at 70% is not 70% likely to close. It means a rep moved it to stage four.
- Forecasting accuracy improves when you separate what you know from what you are assuming, and track the difference over time.
- Marketing and sales alignment on pipeline definitions is not a soft cultural goal. It directly affects the quality of revenue forecasts.
- The most useful forecast is an honest approximation with visible assumptions, not a single confident number with hidden uncertainty baked in.
In This Article
- Why Most Pipeline Forecasts Are Fiction With a Spreadsheet Behind Them
- What Accurate Forecasting Actually Requires
- The Marketing and Sales Alignment Problem Nobody Prices In
- Sector Differences That Change How You Should Forecast
- The False Precision Trap and Why It Matters More Than You Think
- Common Forecasting Mistakes That Persist Despite Being Well-Known
- What Good Pipeline Forecasting Looks Like in Practice
Pipeline forecasting sits at the intersection of sales operations, marketing performance, and commercial strategy. If you want a broader view of how these functions connect, the Sales Enablement & Alignment hub covers the full picture, from lead generation through to closed revenue.
Why Most Pipeline Forecasts Are Fiction With a Spreadsheet Behind Them
I have sat in more pipeline reviews than I care to count. In agencies, in client-side roles, in board meetings where the CFO was asking politely but the subtext was clear. And the pattern I kept seeing was the same: a CRM-generated number presented with unearned confidence, followed by a conversation that revealed the number was largely aspirational.
The problem is structural. Most pipeline forecasts are built on stage-based probability weighting, where each deal stage is assigned a percentage. A deal in discovery might be weighted at 20%. A deal with a proposal out might be 50%. A verbal commitment might be 80%. The CRM multiplies the deal value by the probability and sums everything up. The result looks like a forecast. It is not a forecast. It is a weighted list of guesses with arithmetic applied to it.
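That arithmetic is worth seeing plainly, because there is genuinely nothing else behind the number. A minimal sketch of the calculation, with invented stage weights and deal values (none of this reflects any particular CRM's data model):

```python
# Sketch of stage-weighted pipeline arithmetic. Stage weights and deal
# values are illustrative, not real CRM data.
STAGE_WEIGHTS = {"discovery": 0.20, "proposal": 0.50, "verbal": 0.80}

deals = [
    {"name": "Acme renewal",    "value": 120_000, "stage": "discovery"},
    {"name": "Globex rollout",  "value": 300_000, "stage": "proposal"},
    {"name": "Initech platform","value": 200_000, "stage": "verbal"},
]

# The "forecast" is just value * weight, summed across every open deal.
weighted_pipeline = sum(d["value"] * STAGE_WEIGHTS[d["stage"]] for d in deals)
print(f"£{weighted_pipeline:,.0f}")  # £334,000
```

Every weakness described in this section lives inside `STAGE_WEIGHTS`: if those three percentages are conventions rather than measured conversion rates, the output is a precise-looking number built on guesses.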
The percentage assigned to each stage does not reflect the actual historical close rate of deals at that stage. It reflects a convention someone chose when the CRM was set up, probably years ago, probably without data to support it. And once those percentages are in the system, they tend to stay there indefinitely, regardless of what the actual conversion data shows.
There is also a subtler problem. Reps move deals through stages based on their own optimism, their relationship with the prospect, and sometimes their desire to look like their pipeline is healthy. A deal at stage four is not 70% likely to close. It means a rep decided to move it to stage four. Those are very different things.
What Accurate Forecasting Actually Requires
Accurate forecasting requires three things that most organisations underinvest in: historical conversion data by stage, honest deal qualification, and a clear separation between what you know and what you are assuming.
Historical conversion data is the foundation. If you do not know what percentage of deals at each stage actually closed over the last 12 months, your probability weightings are invented. Pull the data. It will almost certainly show that your stage-four probability should be lower than you think, and that certain deal types or segments convert at very different rates than others. A SaaS sales funnel, for instance, tends to have very different conversion dynamics than a manufacturing or professional services pipeline. Applying the same probability model across both is a category error.
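Pulling that data is not complicated. A minimal sketch of the calculation, where `furthest_stage` and `won` are assumed field names on closed-out deal records, not any specific CRM export format:

```python
# Sketch: stage probabilities derived from history, not convention.
# Deal records are invented; "furthest_stage" is the last stage a deal
# reached before it was finally won or lost.
def close_rate_by_stage(deals, max_stage=4):
    """For each stage, the share of deals that reached it and eventually won."""
    rates = {}
    for stage in range(1, max_stage + 1):
        reached = [d for d in deals if d["furthest_stage"] >= stage]
        if reached:
            rates[stage] = sum(d["won"] for d in reached) / len(reached)
    return rates

history = [
    {"furthest_stage": 2, "won": False},
    {"furthest_stage": 4, "won": True},
    {"furthest_stage": 4, "won": False},
    {"furthest_stage": 3, "won": False},
]
print(close_rate_by_stage(history))  # in this toy data, stage four closes at 50%
```

Run the same calculation per segment or deal type and the case for differentiated weightings usually makes itself.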
Deal qualification is where the commercial judgement lives. MEDDIC, BANT, SPICED, any of the established frameworks will do the job if they are actually used rather than treated as a checkbox exercise. The question is whether the rep can answer: who is the economic buyer, have we spoken to them, is there a confirmed budget, is there a real deadline, and what happens if the prospect does nothing? If those questions cannot be answered cleanly, the deal does not belong in the forecast at the same weight as one where they can.
Separating what you know from what you are assuming is the discipline that most forecasting processes skip entirely. A useful forecast does not just show a number. It shows the assumptions behind the number, and it flags where those assumptions are thin. If two of your top five deals are contingent on a procurement sign-off that has been delayed three times, that should be visible in the forecast, not buried inside a probability percentage.
I spent time working with a client whose sales team was consistently forecasting 40% above what they actually closed. The CRM data looked healthy. The pipeline reviews were thorough. But when we traced the gap, it came down to one thing: deals were being entered into the pipeline at the point of first meeting rather than at the point of confirmed need. The pipeline was full of conversations, not opportunities. Fixing the entry criteria alone, without changing anything else, brought the forecast to within 12% of actual results in two quarters.
The Marketing and Sales Alignment Problem Nobody Prices In
Pipeline forecasting is usually treated as a sales operations problem. Marketing generates leads, sales converts them, and forecasting is what happens downstream. That framing is part of why forecasts are unreliable.
If marketing and sales are not using the same definition of a qualified opportunity, the pipeline will always be polluted. Marketing will report on MQLs using one set of criteria. Sales will accept some of those leads and quietly discard others. The ones that get discarded will not always make it back into the reporting. The result is a pipeline that looks fuller than it is, with a conversion rate that looks worse than it should be, and a forecast that is wrong for reasons nobody can clearly articulate.
This is not a new observation. There is a reason the benefits of sales enablement are so consistently tied to revenue outcomes rather than activity metrics: when marketing and sales operate from the same definitions, the whole system becomes more legible, and forecasting becomes more honest.
The practical fix is a shared pipeline entry definition, agreed in writing, that both teams are accountable to. Not a marketing SLA and a separate sales SLA, but a single definition of what constitutes a pipeline opportunity. What information must be known. What must have been confirmed. What cannot be assumed. This sounds like process hygiene. It is actually a forecasting accuracy intervention.
It is also worth acknowledging that lead quality varies significantly by source, sector, and buyer type. The lead scoring approaches used in higher education illustrate this well: different buyer journeys require different qualification signals, and applying a single scoring model across all of them produces noise rather than signal. The same logic applies to any organisation with multiple buyer segments or product lines.
Sector Differences That Change How You Should Forecast
Pipeline forecasting is not a one-size-fits-all discipline. The dynamics that matter in a SaaS business with a 30-day sales cycle are genuinely different from those in a manufacturing business with a 9-month procurement process.
In longer-cycle B2B environments, the biggest forecasting risk is not that deals fall out late. It is that deals stall in the middle and stay in the pipeline for months without meaningful progress. Reps do not want to lose the deal, so they keep it active. Managers do not want to cut the pipeline, so they accept the optimism. The forecast carries a growing tail of zombie opportunities that will never close but never formally die either.
In manufacturing sales environments, this problem is particularly acute because the stakeholder landscape is complex, procurement timelines are often outside the buyer’s direct control, and deals can be technically alive but commercially frozen. A forecasting model that does not account for deal age alongside deal stage will consistently overstate pipeline health in these environments.
A simple fix is to add a time-in-stage flag to your pipeline reporting. Any deal that has been in the same stage for longer than your average sales cycle for that stage should be automatically flagged for review. Not automatically removed, but reviewed. The discipline of reviewing stalled deals regularly, rather than waiting for a quarterly pipeline clean-up, keeps the forecast honest in a way that stage-based probability weighting cannot.
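The flag itself is trivial to implement. A sketch, where the per-stage thresholds are illustrative placeholders; in practice they should come from your own median time-in-stage data:

```python
# Sketch of a time-in-stage flag. Thresholds are invented for illustration;
# derive real ones from your own average cycle per stage.
from datetime import date

STAGE_THRESHOLD_DAYS = {"discovery": 21, "proposal": 30, "verbal": 45}

def flag_stalled(deals, today):
    """Return (name, days_in_stage) for deals past their stage's threshold."""
    stalled = []
    for d in deals:
        days = (today - d["stage_entered"]).days
        if days > STAGE_THRESHOLD_DAYS[d["stage"]]:
            stalled.append((d["name"], days))
    return stalled

deals = [
    {"name": "Acme",   "stage": "proposal", "stage_entered": date(2024, 3, 1)},
    {"name": "Globex", "stage": "verbal",   "stage_entered": date(2024, 5, 20)},
]
print(flag_stalled(deals, today=date(2024, 6, 1)))  # [('Acme', 92)]
```

The output is a review list, not a cull list: the point is to force a conversation about each stalled deal, not to delete it automatically.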
The False Precision Trap and Why It Matters More Than You Think
One of the things I noticed when judging the Effie Awards was how often the most effective campaigns were the ones built on honest commercial thinking rather than sophisticated measurement theatre. The teams that won were not the ones with the most elaborate attribution models. They were the ones who understood what they were trying to achieve, why, and how they would know if it was working.
The same principle applies to forecasting. A forecast that presents a single number, £4.2 million for Q3, with no indication of the assumptions behind it or the range of plausible outcomes, is not more useful than a forecast that says “between £3.4 million and £4.8 million, with the midpoint dependent on three deals that are currently at verbal commitment but not yet contracted.” The second version is less tidy. It is considerably more honest and considerably more useful for decision-making.
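A range like that can be assembled mechanically by grouping deals according to how committed they actually are, rather than blending everything into one weighted figure. A sketch with invented deals and categories:

```python
# Sketch of a low/mid/high range forecast. Deals, values, and status
# categories are illustrative assumptions.
deals = [
    {"value": 1_200_000, "status": "contracted"},
    {"value":   900_000, "status": "verbal"},
    {"value":   700_000, "status": "verbal"},
    {"value":   600_000, "status": "proposal"},
]

committed = sum(d["value"] for d in deals if d["status"] == "contracted")
verbal    = sum(d["value"] for d in deals if d["status"] == "verbal")
possible  = sum(d["value"] for d in deals if d["status"] == "proposal")

low  = committed                       # only what is signed
mid  = committed + verbal              # verbal commitments hold
high = committed + verbal + possible   # everything in play lands
print(f"£{low/1e6:.1f}m to £{high/1e6:.1f}m, midpoint £{mid/1e6:.1f}m")
```

The structure makes the dependency explicit: anyone reading the range can see exactly which category of deal moves the number from low to mid.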
False precision in forecasting is not just an accuracy problem. It is a trust problem. When a sales leader consistently forecasts £4 million and delivers £2.8 million, the organisation stops trusting the forecast. They apply their own informal discount factor. The forecast becomes a starting point for negotiation rather than a planning tool. The entire purpose of the exercise is undermined.
There is a good body of thinking on this from a strategic perspective. BCG’s work on commercial strategy consistently emphasises the importance of decision quality over analytical sophistication: the goal is not to produce the most detailed analysis but to make better decisions faster. That principle applies directly to pipeline forecasting. A simpler, more honest forecast that decision-makers actually trust will produce better outcomes than a complex one they have learned to discount.
The Forrester perspective on marketing effectiveness makes a related point about the relationship between measurement and credibility: when your numbers consistently fail to predict outcomes, the numbers lose authority. Forecasting is no different. Accuracy builds trust. Trust enables better planning. Better planning drives better outcomes.
Common Forecasting Mistakes That Persist Despite Being Well-Known
There are several forecasting errors that I see repeatedly, across industries, across company sizes, and across teams that should know better by now.
The first is treating CRM completeness as a proxy for forecast quality. A fully populated CRM with clean data in every field does not mean the forecast is reliable. It means the data entry is good. Those are not the same thing. I have seen immaculate CRMs that produced terrible forecasts because the underlying deal qualification was weak and the stage definitions were inconsistently applied.
The second is forecasting from pipeline coverage ratios without interrogating the quality of the coverage. The convention that you need 3x or 4x pipeline coverage to hit your number is a useful heuristic, but it is only useful if the pipeline is genuinely qualified. If your pipeline is full of early-stage conversations and zombie deals, 4x coverage will not save you.
The third is not separating new business from expansion and renewal. These have different conversion rates, different timelines, and different risk profiles. Blending them into a single pipeline number obscures the composition of your forecast and makes it harder to identify where the real risk sits.
It is worth noting that many of these errors are sustained by assumptions that go unexamined for years. The sales enablement myths that persist in organisations often have a direct effect on forecasting quality, particularly the belief that more pipeline activity automatically translates to more revenue. It does not. Pipeline velocity and pipeline quality are different variables, and conflating them produces forecasting errors that compound over time.
The fourth mistake is not tracking forecast accuracy over time. If you do not measure how close your forecasts were to actual outcomes, you cannot improve. Most organisations review the forecast weekly but review forecast accuracy never. Build a simple tracking table: forecast at the start of the quarter, actual at the end, variance, and a brief note on what drove the variance. Do this for four quarters and patterns will emerge that are worth more than any forecasting methodology.
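The tracking table needs nothing more than four columns. A sketch with invented figures to show the shape of it:

```python
# Sketch of the forecast-accuracy table described above. All figures and
# notes are invented for illustration.
quarters = [
    {"quarter": "Q1", "forecast": 4_000_000, "actual": 2_800_000,
     "note": "two top deals slipped"},
    {"quarter": "Q2", "forecast": 3_600_000, "actual": 3_100_000,
     "note": "procurement delay on largest deal"},
]

for q in quarters:
    # Variance as a share of forecast: negative means you over-forecast.
    variance = (q["actual"] - q["forecast"]) / q["forecast"]
    print(f"{q['quarter']}: forecast £{q['forecast']:,}, "
          f"actual £{q['actual']:,}, variance {variance:+.0%} ({q['note']})")
```

Four quarters of this, reviewed honestly, will tell you whether your bias is systematic over-forecasting, and roughly by how much.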
What Good Pipeline Forecasting Looks Like in Practice
Good pipeline forecasting is not complicated. It requires discipline more than sophistication.
It starts with clean pipeline entry criteria that are applied consistently. Every opportunity in the pipeline should meet the same minimum standard of qualification. If it does not, it should be in a separate pre-pipeline stage so it does not inflate the headline number.
It requires probability weightings that are grounded in actual historical conversion data, reviewed at least annually, and differentiated by segment or deal type where the data supports it. If your enterprise deals convert at 22% from stage three and your mid-market deals convert at 41%, those should not carry the same probability weighting.
It benefits from a simple deal scoring layer that captures the strength of qualification on each opportunity, separate from the stage-based probability. A deal at stage four with a confirmed economic buyer, a signed NDA, and a verbal commitment from the CFO is not the same as a deal at stage four where the rep has met the procurement team twice and has a good feeling about it. Both might be at 70% in the CRM. They should not be treated identically in the forecast.
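One way to sketch that scoring layer: a handful of yes/no qualification signals, each carrying a weight, scored independently of stage. The signal names echo the qualification questions earlier in this article; the weights are assumptions for illustration, not a recommended scheme:

```python
# Sketch of a qualification score alongside stage probability. Signals and
# weights (points out of 100) are illustrative assumptions.
SIGNALS = {
    "economic_buyer_engaged":  30,
    "budget_confirmed":        25,
    "deadline_confirmed":      25,
    "cost_of_inaction_known":  20,
}

def qualification_score(deal):
    """0-100: how well-qualified a deal is, independent of its stage."""
    return sum(points for signal, points in SIGNALS.items() if deal.get(signal))

strong = {"economic_buyer_engaged": True, "budget_confirmed": True,
          "deadline_confirmed": True, "cost_of_inaction_known": True}
weak   = {"economic_buyer_engaged": False, "budget_confirmed": False}

# Both deals might sit at stage four in the CRM; the score separates them.
print(qualification_score(strong), qualification_score(weak))  # 100 0
```

A stage-four deal scoring 100 and a stage-four deal scoring 0 can then be weighted differently in the forecast, which is the whole point of the layer.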
It requires the forecast to be presented with visible assumptions. Which deals are carrying the number? What has to happen for each of them to close? What is the realistic downside scenario if two of the top three deals slip? These are not pessimistic questions. They are the questions that allow leadership to make contingency plans rather than being surprised.
Good forecasting also requires the right supporting materials to be in place. When deals stall, it is often because the prospect does not have what they need to make a decision internally. The quality of sales enablement collateral directly affects deal velocity, and deal velocity directly affects forecast reliability. A pipeline full of stalled deals is a forecasting problem with a content solution.
I grew an agency from 20 people to over 100 during a period when new business forecasting was genuinely difficult. The market was shifting, buying cycles were lengthening, and the old heuristics were not holding. What we found was that the teams with the most accurate forecasts were not the ones with the best CRM hygiene. They were the ones who had the most honest conversations about deal risk. They would say in a pipeline review: this deal is in the forecast but the economic buyer changed last month and we have not re-established the relationship. That kind of candour, which requires a culture that does not punish it, is worth more than any forecasting tool.
For a broader view of how sales enablement functions connect to commercial performance, the Sales Enablement & Alignment hub covers the frameworks, tools, and thinking that sit behind effective revenue operations.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
