Pipeline Reviews That Use Your Analytics Data

A pipeline review using analytics means examining your marketing and sales data at each stage of the funnel to identify where leads are progressing, where they are stalling, and what the data suggests you should do differently. Done well, it turns a status meeting into a decision-making session. Done badly, it becomes a slide deck of vanity metrics that nobody acts on.

Most pipeline reviews fall into the second category. Not because the data is missing, but because the questions being asked of it are wrong.

Key Takeaways

  • A pipeline review is only as useful as the questions you bring to it. Start with the business problem, not the dashboard.
  • Stage-by-stage conversion rates are more diagnostic than total pipeline volume. Where leads drop tells you more than how many you have.
  • Velocity matters as much as volume. A deal sitting in stage two for 60 days is a different problem from one that moved through in a week.
  • Analytics tools give you a version of reality, not reality itself. Corroborate data points before acting on them.
  • The output of a pipeline review should be a short list of decisions, not a longer list of things to monitor.

Why Most Pipeline Reviews Are a Waste of Time

I have sat in a lot of pipeline reviews over the years. At agencies, with clients, and in leadership teams where the meeting was technically about the pipeline but functionally about reassuring each other that things were fine. The format was usually the same: someone shared a CRM screen, the team talked through each deal, and the meeting ended with a to-do list that nobody completed before the next one.

The problem was not the data. Most organisations have more data than they can process. The problem was the absence of a structured analytical approach. Without one, a pipeline review defaults to storytelling. Each deal gets a narrative. The narrative is usually optimistic. And the underlying patterns that would actually tell you something useful go unexamined.

Analytics changes that, but only if you use it deliberately. The goal is not to add a GA4 screenshot to your existing slide deck. The goal is to replace the storytelling with a set of diagnostic questions that the data can answer.

If you want to build a stronger foundation for this kind of review, the Marketing Analytics hub at The Marketing Juice covers the measurement principles, tool choices, and analytical frameworks that sit underneath a well-run pipeline process.

What Data Should You Actually Be Looking At?

There is a version of this question that gets answered with a long list of metrics. I am going to resist that, because long lists of metrics are how you end up with a 47-slide deck and no decisions.

The data that matters in a pipeline review falls into three categories: volume, velocity, and conversion.

Volume tells you how much is in the pipeline at each stage. Velocity tells you how fast things are moving through. Conversion tells you the percentage making it from one stage to the next. You need all three. Any one of them in isolation is misleading.

A pipeline with high volume but low velocity is not a healthy pipeline. It looks impressive on a dashboard and feels reassuring in a meeting, but what it often means is that leads are accumulating because nobody is closing them. I saw this pattern repeatedly when I was running agency new business. We had a full pipeline on paper. In practice, half the deals had been sitting at proposal stage for three months because we had not done the follow-up work to move them forward.

Conversion rates by stage are the most diagnostic metric in the set. They tell you precisely where the pipeline is breaking down. If 60% of leads make it from initial contact to discovery call, but only 15% make it from discovery call to proposal, that is where your attention needs to go. Not at the top of the funnel, not at the close rate, at that specific transition.
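The arithmetic behind this is simple enough to sketch. A minimal example, assuming a plain export of stage counts from your CRM (the stage names and figures below are illustrative, not from any real pipeline):

```python
# Illustrative stage counts from a hypothetical CRM export.
# The names and numbers are made up for the example.
stage_counts = {
    "contact": 200,
    "discovery": 120,   # 60% of contacts booked a discovery call
    "proposal": 18,     # only 15% of discovery calls reached proposal
    "closed_won": 9,
}

def stage_conversions(counts):
    """Conversion rate for each stage-to-stage transition."""
    stages = list(counts)
    return {
        f"{a} -> {b}": counts[b] / counts[a]
        for a, b in zip(stages, stages[1:])
    }

rates = stage_conversions(stage_counts)
# The weakest transition is where attention should go first.
weakest = min(rates, key=rates.get)
```

In this example the discovery-to-proposal transition is the outlier, which is exactly the kind of specific finding a review should surface.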

The challenge is connecting your marketing analytics to your CRM data cleanly enough to see these transitions. GA4 can track a significant portion of the pre-CRM experience, but once a lead enters a sales process, you typically need your CRM to carry the data forward. The gap between those two systems is where most pipeline analytics fall apart.

How Do You Structure the Review Itself?

Structure matters more than most people think. A pipeline review without a structure is just a meeting about the pipeline. A structured review is a decision-making process that happens to use pipeline data as its input.

Here is the structure I have used and refined over many years in agency and client-side settings.

Step One: Define the Review Period and the Baseline

Before you look at a single number, agree on the time window. This sounds obvious. It is frequently skipped. Without an agreed period, different people in the room are looking at different data, and the conversation becomes incoherent.

For most businesses, a four-week rolling review makes sense. For longer sales cycles, six to eight weeks. The baseline is your previous period’s conversion rates and velocity figures. You are not looking at numbers in isolation. You are looking at movement.

If your stage-two to stage-three conversion was 32% last period and it is 19% this period, that is a signal. If it is 34%, that is also a signal, but a different one. Without the baseline, you have no way to distinguish signal from noise.
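The baseline comparison can be mechanised so nobody has to eyeball two dashboards side by side. A sketch, assuming you can export conversion rates per transition for each period (the rates and the five-point threshold are illustrative):

```python
# Hypothetical conversion rates for two review periods (fractions, not %).
previous = {"stage2_to_stage3": 0.32, "stage3_to_stage4": 0.25}
current  = {"stage2_to_stage3": 0.19, "stage3_to_stage4": 0.26}

def flag_movement(prev, curr, threshold=0.05):
    """Flag any transition whose rate moved more than `threshold`
    (absolute) in either direction versus the baseline period."""
    return {
        stage: round(curr[stage] - prev[stage], 4)
        for stage in prev
        if abs(curr[stage] - prev[stage]) > threshold
    }

signals = flag_movement(previous, current)
```

The threshold is a judgment call that depends on your volumes: with small pipelines, swings of a few points are often just noise.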

Step Two: Pull the Stage-by-Stage Conversion Data

This is where your CRM and your analytics platform need to be doing the same job. In practice, they rarely are without some configuration work upfront.

If you are using GA4 as part of your analytics stack, the funnel exploration report is the right place to start for the pre-CRM portion of the experience. It lets you define each step in the funnel and see exactly where users are dropping out. Understanding how organic and paid traffic behaves differently at each funnel stage is particularly useful here, because the drop-off patterns are often channel-specific.

The discipline is to resist the temptation to explain away the drop-offs before you have quantified them. I have been in rooms where someone would immediately offer a reason for a poor conversion rate before anyone had established how poor it actually was. Quantify first. Explain later.

Step Three: Measure Velocity at Each Stage

Average time in stage is a metric that most CRMs can produce but most teams do not look at regularly. They should.

When I was managing a team of account managers at an agency, we had a deal that had been sitting at “verbal agreement” stage for eleven weeks. Nobody had flagged it because the volume numbers looked fine. It was only when we started tracking velocity that we realised it had effectively stalled. The client had moved on internally and nobody had told us.

Velocity data surfaces those situations before they become write-offs. Set a threshold for each stage. If something sits in stage two for longer than three weeks, it gets a flag. That flag means someone has to make a decision about it, not just note it for the next meeting.
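The flagging rule above is straightforward to automate. A minimal sketch, assuming your CRM export gives you each open deal's current stage and the date it entered that stage (the thresholds, deal names, and dates are invented for the example):

```python
from datetime import date

# Hypothetical per-stage dwell thresholds in days
# (three weeks for stage two, per the rule of thumb above).
THRESHOLDS = {"stage_two": 21, "proposal": 30, "verbal_agreement": 14}

# Illustrative open deals: (name, current stage, date it entered that stage).
deals = [
    ("Acme renewal", "stage_two", date(2024, 1, 2)),
    ("Globex pitch", "proposal", date(2024, 2, 20)),
    ("Initech", "verbal_agreement", date(2023, 12, 1)),
]

def stalled(deals, thresholds, today):
    """Return the deals whose time in stage exceeds the stage threshold."""
    return [
        name
        for name, stage, entered in deals
        if (today - entered).days > thresholds[stage]
    ]

flags = stalled(deals, THRESHOLDS, today=date(2024, 3, 1))
```

Each flagged deal should arrive at the review with a decision attached, not just a mention.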

This is also where integrating behavioural analytics tools alongside your CRM data adds real value. Tools like Hotjar used in combination with GA4 can show you whether leads who are stalling are still engaging with your content. If they are visiting pricing pages and reading case studies, they are still active. If they have gone quiet across all touchpoints, the deal is probably dead and you should treat it accordingly.

Step Four: Segment the Pipeline by Source

Not all pipeline is equal, and aggregate conversion rates hide more than they reveal. A blended close rate of 22% might look acceptable until you break it down and find that inbound organic leads close at 38% while paid leads close at 9%.

That is not a hypothetical. It is a pattern I have seen consistently across different industries and business models. The source of a lead has a significant bearing on how it behaves in the pipeline, and if you are not segmenting by source in your review, you are making decisions based on averages that do not reflect the reality of any individual channel.

The segmentation that typically matters most: organic search versus paid search, direct versus referral, content-driven versus outbound. If you have the data to go deeper, segment by campaign or keyword cluster. Preparation in web analytics means knowing which questions you need to answer before you pull the data, not after.

The output of this step should be a clear view of which sources are producing pipeline that converts and which are producing volume that flatters your top-of-funnel numbers without contributing to revenue.
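To make the blended-versus-segmented gap concrete, here is a sketch using invented lead records shaped to match the figures above (38% organic, 9% paid, blending to roughly 22%):

```python
# Illustrative closed-pipeline records: (lead source, won?).
leads = (
    [("organic", True)] * 38 + [("organic", False)] * 62
    + [("paid", True)] * 9 + [("paid", False)] * 91
)

def close_rate_by_source(records):
    """Close rate per source, from (source, won) pairs."""
    totals, wins = {}, {}
    for source, won in records:
        totals[source] = totals.get(source, 0) + 1
        wins[source] = wins.get(source, 0) + won
    return {s: wins[s] / totals[s] for s in totals}

rates = close_rate_by_source(leads)
blended = sum(won for _, won in leads) / len(leads)
```

The blended figure here looks acceptable on its own, which is exactly why it is dangerous: it describes a pipeline that does not exist in either channel.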

Step Five: Identify the Constraint

Every pipeline has a constraint. One stage where the conversion rate is materially worse than the others, or where velocity slows disproportionately. The analytical job of a pipeline review is to find that constraint and focus attention on it.

This is where I would push back on the instinct to try to fix everything at once. In most of the agency turnarounds I have been involved in, the temptation was to run twelve improvement initiatives simultaneously. The result was usually that none of them got the focus they needed and the pipeline metrics barely moved.

Identify the single stage where improvement would have the greatest downstream impact. Fix that first. Measure the effect. Then move to the next constraint. It is slower in theory and faster in practice.
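One way to estimate "greatest downstream impact" rather than guess at it: simulate the same absolute lift at each stage and see which one adds the most closed deals. A sketch with invented rates and volumes:

```python
# Hypothetical per-stage conversion rates, top of funnel to close.
rates = {"contact->discovery": 0.60, "discovery->proposal": 0.15,
         "proposal->close": 0.50}
TOP_OF_FUNNEL = 1000  # leads entering per period (illustrative)

def closed(rates):
    """Closed deals implied by chaining the stage conversion rates."""
    out = TOP_OF_FUNNEL
    for r in rates.values():
        out *= r
    return out

def best_constraint(rates, lift=0.05):
    """Which single stage, improved by `lift` (absolute), adds the most
    closed deals? The worst stage usually wins, because a fixed absolute
    lift is a larger relative gain there."""
    baseline = closed(rates)
    gains = {}
    for stage in rates:
        trial = dict(rates)
        trial[stage] = min(trial[stage] + lift, 1.0)
        gains[stage] = closed(trial) - baseline
    return max(gains, key=gains.get)
```

In this example, a five-point lift at the discovery-to-proposal stage is worth roughly four times the same lift anywhere else, which is the whole argument for fixing one constraint at a time.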

If you are using a more sophisticated analytics stack, this is where tools that go beyond standard GA4 reporting can be worth the investment. Heap, for example, captures behavioural data retroactively in ways that GA4 does not, which means you can go back and analyse the behaviour of leads that converted versus those that did not, even if you did not set up the tracking in advance.

Step Six: Produce Decisions, Not Observations

This is the step that most pipeline reviews skip entirely. They produce observations. “Our stage-three conversion is down.” “Velocity has slowed in the enterprise segment.” “Paid leads are not closing as well as organic.”

Those are observations. They are not decisions. A decision is: “We are pausing the paid search budget for enterprise leads until we understand why the close rate is 9% and we have a hypothesis about how to fix it.” Or: “We are adding a mandatory follow-up sequence for any deal that sits in stage two for more than two weeks.”

The output of a pipeline review should be a list of three to five decisions with an owner and a date. Not a list of things to monitor. Monitoring without deciding is just watching the problem get worse with better visibility.

One of the lessons I took from judging the Effie Awards is that the work that wins is almost always built on a clear, specific decision made early in the process. The teams that produced the most effective campaigns were not the ones with the most data. They were the ones who had used their data to make a clear call and then committed to it.

What About the Tools?

The tools matter less than the process, but they do matter. The combination of GA4 for web behaviour, a CRM for pipeline stage tracking, and a data visualisation layer to connect them is the practical minimum for a meaningful pipeline review.

If you are using video content as part of your marketing, integrating video engagement data into your pipeline view is worth considering. Wistia’s GA4 integration lets you see how video consumption correlates with pipeline progression, which can be genuinely useful if you are using case study videos or product walkthroughs as part of your nurture process.

For teams that want to go beyond GA4 into more granular product or behavioural analytics, Mixpanel offers a different approach to event-based tracking that some teams find more flexible for pipeline-adjacent analysis. Whether it is worth the additional complexity depends on your sales cycle length and the volume of data you are working with.

The warning I would give about any analytics tool in this context comes from something Forrester has written about regarding black-box analytics: if you cannot explain how a number is being calculated, you should not be making decisions based on it. That applies to CRM-generated pipeline scores, attribution models, and any predictive feature your analytics platform is surfacing. Understand the mechanism before you trust the output.

There is a broader point here about how analytics tools fit into a measurement strategy. A pipeline review is one application of a wider analytical capability. If you want to build that capability properly, the Marketing Analytics section of The Marketing Juice covers the full stack, from tool selection through to measurement frameworks and attribution.

How Often Should You Run a Pipeline Review?

The honest answer is: more often than most teams do, and with less ceremony than most teams apply to it.

Monthly is the minimum for most businesses. Fortnightly is better if your sales cycle is short and your pipeline volume is high. Weekly is only useful if you have the data infrastructure to make it meaningful, and most organisations do not.

The ceremony is the enemy. The more elaborate the deck, the longer the meeting, the more the review becomes a performance rather than a decision-making process. The best pipeline reviews I have run were forty-five minutes with a single shared dashboard and a clear agenda: what has changed, why has it changed, what are we going to do about it.

That discipline is harder to maintain than it sounds. There is always pressure to add more metrics, more context, more slides. Resist it. The value of a pipeline review is proportional to the clarity of the decisions it produces, not the comprehensiveness of the data it presents.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a pipeline review using analytics?
A pipeline review using analytics is a structured process of examining your marketing and sales data at each stage of the funnel to identify where leads are progressing, where they are stalling, and what the data suggests you should change. It replaces anecdotal deal-by-deal discussion with a diagnostic approach based on conversion rates, velocity, and source segmentation.
Which metrics matter most in a pipeline review?
The three categories that matter most are volume, velocity, and conversion rate by stage. Volume tells you how much is in the pipeline. Velocity tells you how fast it is moving. Stage-by-stage conversion rates tell you precisely where the pipeline is breaking down. All three together give you a diagnostic picture. Any one of them in isolation is misleading.
How do you connect GA4 data to a pipeline review?
GA4 covers the pre-CRM portion of the experience, from first visit through to lead capture. The funnel exploration report in GA4 lets you define each step and see drop-off rates. Once a lead enters your CRM, the pipeline data lives there. Connecting the two cleanly requires consistent UTM tracking and, ideally, a shared identifier or integration between your analytics platform and your CRM. The gap between these two systems is where most pipeline analytics break down.
How often should a pipeline review be conducted?
Monthly is the minimum for most businesses. Fortnightly works better when sales cycles are short and pipeline volume is high. Weekly reviews are only useful if your data infrastructure can support them with meaningful updates. The frequency matters less than the consistency and the discipline of producing actual decisions at the end of each review, rather than a list of things to monitor.
Why does segmenting pipeline by source matter?
Because aggregate conversion rates hide channel-specific behaviour. Inbound organic leads typically convert at a materially higher rate than paid leads, and outbound leads often behave differently again. If you review your pipeline using blended figures, you are making decisions based on averages that do not reflect the reality of any individual channel. Segmenting by source tells you which channels are producing pipeline that actually closes and which are producing volume that flatters your top-of-funnel numbers without contributing to revenue.