The Demand Waterfall Is Leaking. Here Is Where to Look

The demand waterfall is a framework for mapping how potential buyers move from initial awareness through to closed revenue. It gives go-to-market teams a shared vocabulary for tracking pipeline health, diagnosing where deals are stalling, and aligning sales and marketing around the same numbers. Done well, it turns a chaotic funnel into a legible system.

Done badly, it becomes a reporting exercise that makes everyone feel busy while the business quietly underperforms.

I have seen both versions. The difference usually comes down to whether the waterfall is being used to generate insight or to generate slides.

Key Takeaways

  • The demand waterfall only creates value when conversion rates between stages are tracked honestly, not reverse-engineered from targets.
  • Most waterfall models are biased toward lower-funnel stages because that is where attribution is easiest, not because that is where growth happens.
  • Stage definitions that differ between sales and marketing are the most common reason waterfall data becomes unreliable within six months of implementation.
  • Velocity, not just volume, is a critical diagnostic. Deals that take twice as long to close at the same conversion rate represent a real margin problem.
  • A waterfall that shows healthy conversion rates but flat revenue usually means the top of the funnel is too narrow, and the business is optimising a small pool rather than expanding it.

What Is the Demand Waterfall and Where Did It Come From?

The demand waterfall was originally developed by SiriusDecisions in 2006 as a way to model the B2B buying process in stages, from raw inquiry through to closed business. It was updated significantly in 2012 and again in 2017 to reflect changes in how buyers actually behave, particularly the rise of self-directed research and the blurring of the line between marketing-qualified and sales-qualified pipeline.

The core idea is simple. Demand does not arrive fully formed. It passes through stages, and at each stage some proportion of potential buyers drop out. If you can measure the conversion rate at each stage, you can identify where the system is inefficient and fix it. You can also work backwards from a revenue target to understand how much demand you need to generate at the top of the funnel to hit your number.

In its most common form, the waterfall includes stages roughly equivalent to: target universe, inquiries, marketing qualified leads, sales accepted leads, sales qualified leads, and closed revenue. The labels vary by organisation, and they should, because the definitions need to reflect how your specific business generates and converts demand.

What does not vary is the underlying logic: measure volume and conversion at every stage, track velocity through the system, and use the data to make decisions rather than decorate a dashboard.

Why Most Waterfall Models Fail Within a Year

Early in my career I was obsessed with lower-funnel performance. Conversion rates, cost per acquisition, return on ad spend. It felt like rigorous marketing because the numbers were clean and the attribution was relatively straightforward. It took me longer than I would like to admit to recognise that a lot of what I was crediting to performance activity was going to happen anyway. We were capturing demand that already existed, not creating new demand. The waterfall looked healthy because the bottom was converting well. The problem was the top was barely moving.

This is the most common failure mode I see in demand waterfall implementations. Teams optimise the stages they can measure most easily, which are almost always the stages closest to revenue, and they neglect the stages where growth actually comes from.

There are three other structural failure modes worth naming.

The first is definitional drift. When marketing and sales define MQL, SAL, and SQL differently, the waterfall becomes a fiction. I have sat in pipeline reviews where marketing was reporting strong MQL volume and sales was reporting weak pipeline, and both were technically correct because they were measuring different things. The fix is boring but necessary: written stage definitions, agreed by both functions, reviewed quarterly.

The second is reverse engineering. When targets are set at the revenue level and conversion rates are assumed rather than measured, teams work backwards to justify the input volume they need. The waterfall becomes a planning tool rather than a diagnostic one. It tells you what you want to believe, not what is actually happening.

The third is ignoring velocity. A waterfall that tracks volume and conversion but not time is missing a critical dimension. If your average deal cycle is lengthening, that has real implications for cash flow, forecasting accuracy, and the cost of carrying pipeline. I have seen businesses with apparently healthy conversion rates running into serious margin problems because the time from inquiry to close had quietly doubled over eighteen months.

Understanding these failure modes is part of building a go-to-market system that actually works. If you want a broader view of how demand waterfall thinking fits into commercial strategy, the Go-To-Market and Growth Strategy hub covers the full landscape, from market entry to pipeline architecture.

How to Define Waterfall Stages That Sales and Marketing Both Trust

The stage definitions are where most waterfall implementations either earn credibility or lose it. Get this wrong and everything downstream is unreliable.

Start with the handoff points rather than the stages themselves. The most consequential moment in the waterfall is when marketing passes a lead to sales. If that handoff is ambiguous, you will have marketing passing leads that sales ignores, and sales complaining about lead quality while marketing complains about follow-up. Both complaints are usually valid, which is what makes the conversation so frustrating.

A marketing qualified lead should be defined by observable behaviour and fit criteria, not by the volume of activity you need to hit your MQL target. I have seen MQL definitions that were essentially “anyone who downloaded a whitepaper,” which is not a definition; it is a vanity metric with a label on it. A useful MQL definition combines fit (does this person work at a company that could buy from us?) with intent (have they demonstrated behaviour that suggests they are in or approaching a buying process?).

A sales accepted lead is the moment sales acknowledges that the lead meets the agreed criteria and commits to working it. This stage is often skipped in waterfall models, which is a mistake. Without it, you have no way of knowing whether sales is actually reviewing the leads marketing passes, or whether they are sitting in a CRM queue being quietly ignored.

A sales qualified lead is the point at which a sales rep has had enough of a conversation with the prospect to confirm there is a real opportunity: budget, authority, need, and some sense of timeline. This is the stage most closely correlated with eventual revenue, which is why it is the most important conversion rate to track and the most dangerous one to game.

Once you have agreed definitions, document them in writing, make them accessible to both functions, and build them into your CRM so that stage progression requires the right criteria to be met. Then review them every quarter. Buying behaviour changes. Your definitions should too.
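If it helps to see what “build them into your CRM” means in practice, here is a minimal sketch of stage-gating logic in Python. The criterion names (firmographic_fit, intent_signal, and so on) are invented for illustration, not taken from any particular CRM; the point is that a lead cannot enter a stage until every agreed criterion is met.

```python
# Hypothetical sketch: stage progression gated on written, agreed criteria.
# Criterion names are illustrative, not from any specific CRM.
STAGE_CRITERIA = {
    "MQL": ["firmographic_fit", "intent_signal"],
    "SAL": ["firmographic_fit", "intent_signal", "sales_reviewed"],
    "SQL": ["firmographic_fit", "intent_signal", "sales_reviewed",
            "budget_confirmed", "authority_confirmed", "need_confirmed"],
}

def can_advance(lead: dict, target_stage: str) -> bool:
    """A lead may only enter a stage when every agreed criterion is met."""
    return all(lead.get(criterion, False) for criterion in STAGE_CRITERIA[target_stage])

lead = {"firmographic_fit": True, "intent_signal": True}
can_advance(lead, "MQL")  # True: fit and intent are both present
can_advance(lead, "SQL")  # False: budget, authority, and need unconfirmed
```

The useful property of encoding the definitions this way is that “different people applying different criteria” becomes impossible by construction: the gate either passes or it does not.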

The Conversion Rates That Actually Matter

Every stage transition in the waterfall has a conversion rate. Not all of them carry equal diagnostic weight.

The conversion rate from inquiry to MQL tells you about the quality of your demand generation. If a large proportion of your inquiries are failing to meet MQL criteria, you are either generating the wrong kind of demand (wrong audience, wrong channels, wrong content) or your MQL definition is set too high relative to the volume you need.

The conversion rate from MQL to SAL tells you about the alignment between marketing and sales. A low SAL rate usually means one of two things: either marketing is passing leads that do not meet the agreed criteria, or sales is not reviewing leads promptly and consistently. Both are fixable, but they require different interventions.

The conversion rate from SAL to SQL tells you about the quality of the conversations sales is having. A low SQL rate can indicate that the leads are not as qualified as they appeared, that the sales approach is not landing, or that the product is not resonating with the audience being targeted.

The conversion rate from SQL to closed revenue is the one everyone watches most closely, but it is often the least actionable at a systemic level. By the time a deal reaches SQL, most of the upstream decisions have already been made. If your close rate is low, the answer is rarely “better closing.” It is usually something earlier in the process: wrong audience, wrong positioning, wrong qualification criteria.
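These stage-to-stage rates are simple to compute once volumes are tracked consistently. A short sketch, with invented stage volumes purely for illustration:

```python
# Illustrative volumes at each waterfall stage, top to bottom (invented numbers).
stage_volumes = [
    ("Inquiry", 4000),
    ("MQL", 1200),
    ("SAL", 720),
    ("SQL", 430),
    ("Closed", 170),
]

# Conversion rate for each transition: downstream volume / upstream volume.
transitions = [
    (f"{upper} -> {lower}", lower_vol / upper_vol)
    for (upper, upper_vol), (lower, lower_vol) in zip(stage_volumes, stage_volumes[1:])
]

for name, rate in transitions:
    print(f"{name}: {rate:.0%}")

# The weakest transition is where diagnosis should start.
weakest = min(transitions, key=lambda t: t[1])
```

In this invented example the weakest transition is Inquiry to MQL at 30%, which points the diagnosis upstream, toward demand quality, exactly as the section above argues.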

BCG’s work on commercial transformation in go-to-market strategy makes a related point about where revenue growth actually comes from. The instinct to optimise what is already converting is understandable, but the bigger opportunity is almost always in expanding the addressable pool and improving upstream quality, not squeezing more from the bottom of the funnel.

How to Use the Waterfall for Revenue Planning Without Fooling Yourself

One of the legitimate uses of the demand waterfall is working backwards from a revenue target to understand what you need to generate at each stage. If you need to close £5 million in new business and your average deal size is £50,000, you need 100 closed deals. If your SQL-to-close rate is 40%, you need 250 SQLs. If your SAL-to-SQL rate is 60%, you need roughly 417 SALs. And so on up the funnel.
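The arithmetic above can be laid out directly. The conversion rates here are the ones from the worked example, not benchmarks:

```python
import math

# Working backwards from the revenue target in the example above.
revenue_target = 5_000_000   # £5m in new business
avg_deal_size = 50_000       # £50k average deal
sql_to_close = 0.40          # measured, not assumed
sal_to_sql = 0.60

deals_needed = revenue_target / avg_deal_size   # 100 closed deals
sqls_needed = deals_needed / sql_to_close       # 250 SQLs
sals_needed = math.ceil(sqls_needed / sal_to_sql)  # 417 SALs (rounded up)

print(deals_needed, sqls_needed, sals_needed)
```

Continue dividing by each measured rate up through SAL, MQL, and inquiry to arrive at the demand generation requirement at the top.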

This is useful. It makes the demand generation requirement concrete and gives both marketing and sales a shared view of what the pipeline needs to look like at every stage to hit the number.

The trap is using assumed conversion rates rather than measured ones. I have seen planning processes where the conversion rates were essentially chosen to make the model work rather than derived from historical data. When that happens, the plan looks coherent on paper and falls apart in execution because the assumptions were never valid.

If you do not have reliable historical conversion rate data, say so. Use ranges rather than point estimates. Build in explicit assumptions and review them quarterly as actuals come in. Honest approximation is more useful than false precision. I would rather work with a plan that acknowledges its uncertainty than one that presents a single number with unwarranted confidence.

Forrester’s analysis of go-to-market struggles in complex industries highlights a common pattern: organisations that plan from the top down, imposing conversion rate assumptions rather than measuring them, consistently struggle to forecast accurately and to identify where their pipeline is actually breaking down.

The Top-of-Funnel Problem Nobody Wants to Talk About

There is a version of the demand waterfall that looks perfectly healthy and still produces disappointing revenue growth. Conversion rates are solid. Velocity is reasonable. The pipeline is moving. But the business is not growing.

Almost always, the problem is at the top. The addressable pool is too small, and the business is recycling the same audience through the funnel rather than expanding into new demand.

Think of it this way. If you are only reaching people who are already aware of your category and actively looking for a solution, you are competing for a fixed pool of intent. You can get better at capturing that intent, and you should, but you cannot grow the pool by optimising conversion rates. You grow it by reaching people who are not yet in the market, building familiarity and preference before they have a defined need, so that when they do enter the buying process, you are already on their shortlist.

I think about this in terms of a simple analogy. Someone who has already tried on a jacket in a shop is far more likely to buy it than someone who has never seen it. The question is not just how to convert the people who are already trying things on. It is how to get more people into the shop in the first place. That is a brand and awareness problem, not a conversion problem, and the demand waterfall will not show it to you unless you are measuring the size of your addressable universe and tracking how much of it you are actually reaching.

Semrush’s analysis of market penetration strategy is relevant here. Penetration growth requires reaching new audiences, not just improving the efficiency of existing demand capture. The two are not in competition, but they require different investments and different timeframes.

Integrating the Waterfall With Your Broader GTM System

The demand waterfall does not exist in isolation. It is one component of a broader go-to-market system, and its value depends on how well it connects to the other parts of that system.

On the demand generation side, the waterfall needs to connect to your channel strategy and your audience targeting. If you are generating inquiries from the wrong segments, the waterfall will tell you that MQL conversion is low, but it will not tell you why unless you are tracking segment data alongside stage data.

On the sales side, the waterfall needs to connect to pipeline management and forecasting. A CRM that does not enforce stage definitions consistently will produce waterfall data that looks precise but is actually a mix of different people applying different criteria at different times.

On the product side, the waterfall can surface patterns that are actually product signals. If SQL-to-close rates are consistently lower for a particular use case or segment, that might mean the product is not solving the problem well enough for that audience, not that the sales process is broken.

When I was running iProspect and we were growing the team from around 20 people toward 100, one of the things that forced clarity was building a shared pipeline view that both the commercial team and the delivery team could see. It was a crude version of a demand waterfall, but it meant that everyone understood where revenue was coming from, what was at risk, and what needed to happen upstream to keep the business growing. The discipline of tracking the stages honestly, rather than optimistically, was what made it useful.

BCG’s research on go-to-market strategy in financial services makes a point that applies broadly: the organisations that get the most value from structured pipeline frameworks are the ones that use them to surface uncomfortable truths, not the ones that use them to validate existing assumptions.

For teams building or rebuilding their pipeline architecture, the broader context matters as much as the framework itself. The Go-To-Market and Growth Strategy hub covers how demand generation, commercial planning, and channel strategy fit together into a coherent system, rather than a collection of separate tactics.

What Good Waterfall Reporting Actually Looks Like

Good waterfall reporting is not a weekly slide with green arrows. It is a regular, honest conversation between marketing and sales about where the pipeline is healthy, where it is not, and what is being done about it.

The reporting cadence matters. Monthly is usually the right frequency for waterfall reviews at a strategic level. Weekly pipeline reviews tend to be too granular to show meaningful trends, and quarterly is too slow to catch problems before they affect revenue.

The metrics that belong in a waterfall report are: volume at each stage, conversion rate at each stage transition, average time in stage, and trend direction for each of those metrics over the last three to six months. That last point is important. A single data point tells you where you are. A trend tells you where you are going.

Cohort analysis adds another layer of usefulness. Rather than looking at all pipeline as a single pool, cohort analysis tracks groups of leads that entered the funnel in the same period and follows them through to resolution. This makes it much easier to understand whether a change in conversion rate is a real trend or a timing artefact.
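A minimal sketch of what cohort-based reporting looks like, with invented monthly cohorts. Each cohort is the set of leads that entered the funnel in the same month, followed through to resolution:

```python
# Hypothetical monthly cohorts: leads that entered the funnel in the same
# month, tracked through to resolution. All numbers are invented.
cohorts = {
    "2024-01": {"entered": 500, "reached_sql": 90, "closed": 30},
    "2024-02": {"entered": 520, "reached_sql": 88, "closed": 26},
    "2024-03": {"entered": 510, "reached_sql": 70, "closed": 21},
}

# Per-cohort rates make a trend visible that a pooled average would hide.
for month, c in cohorts.items():
    entry_to_sql = c["reached_sql"] / c["entered"]
    sql_to_close = c["closed"] / c["reached_sql"]
    print(f"{month}: entry->SQL {entry_to_sql:.1%}, SQL->close {sql_to_close:.1%}")
```

In this invented data, entry-to-SQL conversion slides from 18% to under 14% across three cohorts while close rates hold roughly steady, a pattern a single pooled number would smooth over entirely.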

Tools like Hotjar can contribute useful behavioural data to the upper stages of the waterfall, particularly around how prospects are engaging with content and where they are dropping off before they convert to an inquiry. That kind of behavioural feedback loop sits upstream of the formal waterfall stages but informs them directly.

The most important thing about waterfall reporting is that it should generate decisions, not just observations. If you leave a pipeline review without a clear view of what is being changed and why, the report has done its job as a document but failed as a management tool.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the demand waterfall in B2B marketing?
The demand waterfall is a framework for tracking how potential buyers move through defined stages from initial awareness or inquiry through to closed revenue. It gives marketing and sales a shared model for measuring pipeline health, diagnosing conversion problems, and planning the demand generation activity needed to hit revenue targets. The original model was developed by SiriusDecisions and has been updated several times to reflect changes in B2B buying behaviour.
What are the main stages in a demand waterfall?
The most common stages are: target universe (the total addressable market), inquiry (anyone who has engaged with your brand in a trackable way), marketing qualified lead (prospects who meet defined fit and intent criteria), sales accepted lead (leads that sales has reviewed and committed to working), sales qualified lead (opportunities confirmed by a sales conversation), and closed revenue. The exact labels and definitions vary by organisation and should be tailored to reflect how your business actually generates and converts demand.
Why do marketing and sales disagree about lead quality in waterfall models?
The most common reason is that marketing and sales are using different definitions for the same stage labels. If an MQL means one thing to marketing and something slightly different to sales, both teams can be technically correct in their reporting while the pipeline data becomes unreliable. The fix is written stage definitions agreed by both functions, enforced in the CRM, and reviewed regularly. Without that shared definition layer, waterfall data reflects different people applying different judgements rather than a consistent measurement of the same thing.
How do you use the demand waterfall for revenue planning?
Start with your revenue target and work backwards using measured conversion rates at each stage. If you need 100 closed deals and your SQL-to-close rate is 40%, you need 250 SQLs. Apply the same logic up through each stage to arrive at the inquiry volume required. The critical discipline is using actual measured conversion rates rather than assumed ones. If you do not have reliable historical data, use ranges and document your assumptions explicitly so they can be tested and revised as actuals come in.
What does it mean when waterfall conversion rates look healthy but revenue is flat?
It usually means the top of the funnel is too narrow. If conversion rates through the waterfall are solid but the business is not growing, the most likely explanation is that the addressable pool is not expanding. The business is efficiently converting a fixed audience rather than reaching new demand. This is a brand and awareness problem, not a conversion problem. The solution is investment in upper-funnel activity that builds familiarity with audiences who are not yet in an active buying process, expanding the pool that eventually flows into the waterfall.
