Your Marketing Funnel Is Lying to You

Most marketing funnels don’t describe how customers actually behave. They describe how marketers wish customers would behave: a clean, linear progression from awareness to consideration to purchase, with a conversion rate at each stage and a tidy cost-per-acquisition at the bottom. The reality is messier, more circular, and far more interesting than the diagram suggests.

The funnel isn’t wrong as a mental model. It becomes a problem when teams treat it as a measurement system rather than a thinking tool, and start optimising the model instead of the actual customer experience.

Key Takeaways

  • The marketing funnel is a useful thinking tool but a poor measurement system. Treating it as the latter leads to misallocated budgets and false confidence in lower-funnel performance.
  • Much of what lower-funnel activity gets credited for was going to happen anyway. Capturing existing intent is not the same as creating new demand.
  • Funnel stages break down in practice because customers move between them non-linearly, often multiple times before converting.
  • The most valuable CRO work happens when you understand where customers actually lose confidence, not just where they drop off in a session recording.
  • A funnel that looks healthy on paper can still be starving at the top. Volume problems disguised as conversion problems are common and expensive to misdiagnose.

I spent the early part of my career deeply in love with lower-funnel performance metrics. Click-through rates, cost-per-click, conversion rates, return on ad spend. The numbers were clean, the attribution was (apparently) clear, and the story they told was compelling. It took me longer than I’d like to admit to question whether the story was actually true.

Why the Funnel Model Flatters Lower-Funnel Activity

Here’s the structural problem with how most organisations use the funnel. Performance is measured at the bottom, budget decisions are made based on that performance, and the top of the funnel gets treated as a cost centre rather than a growth driver. The result is a slow, invisible starvation of the pipeline.

When I was running iProspect, we grew the agency from around 20 people to over 100 and moved from loss-making to a top-five position in the market. A significant part of that came from challenging the attribution logic that our clients, and frankly our own team, had accepted as gospel. Lower-funnel paid search was consistently taking credit for conversions that would have happened regardless. The customer had already made their decision. They were just using Google to find the checkout.

Think about it like a clothes shop. Someone who picks up a garment and tries it on is far more likely to buy it than someone who walks past the rail. But if the shop only measured sales at the till, it would look like the till was doing all the work. The fitting room, the visual merchandising, the window display, the location of the store: all invisible in the final transaction data. That’s exactly what happens when you over-index on last-click or last-touch attribution at the bottom of a digital funnel.

The TOFU, MOFU, BOFU framework is a reasonable way to think about this structurally. Where it goes wrong is when teams use it to justify pouring budget into BOFU because the numbers look better, while the TOFU activity that feeds the whole system quietly atrophies.

The Funnel Stages That Actually Matter (and the Ones That Don’t)

Not all funnel stages are created equal, and the ones that matter most vary significantly by category, purchase cycle, and customer type. A SaaS business with a 60-day free trial has a completely different funnel problem from an ecommerce brand selling £30 skincare products. Applying the same framework to both and expecting the same optimisation levers to work is one of the most common and expensive mistakes I see.

For high-consideration purchases, the middle of the funnel is where most of the real work happens. This is where customers are comparing options, reading reviews, watching demos, and deciding whether they trust you enough to proceed. Most CRO programmes spend disproportionate energy on the checkout flow while the consideration stage, where the actual decision is made, gets minimal attention.

For lower-consideration purchases, the top of the funnel matters more than most brands realise. If you’re not reaching new audiences, you’re not growing. You’re just recycling the same pool of existing intent, and that pool has a ceiling. I’ve sat in budget reviews where a client was celebrating a 15% improvement in conversion rate while their total addressable audience had shrunk by 30% because brand investment had been cut to fund the performance activity that was producing the “better” numbers. The funnel looked great. The business was declining.
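The arithmetic behind that budget review is worth making concrete. This is a quick sketch using hypothetical baseline numbers (the audience size and starting conversion rate are invented; only the 15% and 30% figures come from the example above):

```python
# Illustrative: conversion rate up 15%, addressable audience down 30%.
# Net conversions = audience reached x conversion rate.
baseline_audience = 100_000      # hypothetical reachable audience
baseline_cvr = 0.02              # hypothetical 2% conversion rate
baseline_conversions = baseline_audience * baseline_cvr     # 2,000

new_audience = baseline_audience * (1 - 0.30)   # audience shrinks 30%
new_cvr = baseline_cvr * (1 + 0.15)             # conversion rate improves 15%
new_conversions = new_audience * new_cvr        # 1,610

change = new_conversions / baseline_conversions - 1
print(f"Net change in conversions: {change:.1%}")  # -19.5%
```

The funnel report shows the 15% improvement; the roughly 20% fall in total conversions shows up nowhere on it.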

If you’re working through a conversion optimisation programme and want to understand how funnel analysis fits into the broader picture, the CRO and Testing hub covers the full landscape, from audit frameworks to testing methodology to commercial measurement.

Where Customers Actually Drop Off (Versus Where Your Analytics Say They Do)

Session recordings and funnel reports in your analytics platform show you where people leave. They don’t tell you why. And the gap between those two things is where most CRO programmes get stuck.

A drop-off at the product page might look like a product page problem. But if the customer arrived from a paid ad that promised something the product page didn’t deliver, the problem is upstream. The product page is just where the disappointment became visible. Fix the product page and you’ll see marginal improvement. Fix the message match between the ad and the landing experience and you’ll see something meaningful.

Heatmap tools are useful for understanding where attention goes on a page and what people interact with. Setting up heatmaps correctly is a reasonable starting point for any funnel analysis, but the data they produce is a signal, not a diagnosis. I’ve reviewed heatmap reports that confidently concluded a CTA button needed to move, when the real issue was that nobody trusted the offer enough to click anything, regardless of where the button sat.

The more useful question is: where does customer confidence break down? Not where do they click or not click, but at what point in the experience does doubt overtake intent? That’s a harder question to answer with quantitative data alone, and it’s why qualitative research, exit surveys, customer interviews, and session replay analysis need to work together rather than in isolation.

Hotjar’s guidance on funnel optimisation is worth reading for the methodological side of this. The principle of combining behavioural data with direct customer feedback is sound, even if the specific tools you use will vary.

The Volume Problem Disguised as a Conversion Problem

One of the most reliable ways to waste six months of optimisation effort is to misdiagnose a volume problem as a conversion problem. It happens constantly, and it happens because conversion rate is easier to measure than audience reach or brand consideration.

If your funnel is converting at 2% and you want to grow, there are two paths. Improve the conversion rate to 3% or 4%, which is genuinely difficult and often yields diminishing returns after the obvious fixes are made. Or bring more of the right people into the top of the funnel, which is harder to attribute but often more impactful at scale.

I judged the Effie Awards for several years, and the entries that consistently impressed me were the ones that demonstrated growth through genuine audience expansion, not just efficiency gains on existing demand. The campaigns that won were usually the ones that had done something to change who knew about the brand or how they felt about it, not just the ones that had shaved points off a cost-per-acquisition.

That doesn’t mean conversion optimisation isn’t valuable. It absolutely is. But it needs to be applied to the right problem. A leaky bucket needs fixing. But if the tap has been turned down, fixing the bucket won’t solve your water shortage.

For ecommerce businesses specifically, Mailchimp’s overview of ecommerce CRO does a reasonable job of framing the relationship between traffic quality and conversion performance. The point about traffic source mattering as much as on-site experience is one that gets underemphasised in most CRO conversations.

Multi-Touch Reality: How Customers Actually Move Through Funnels

The linear funnel model, awareness then consideration then conversion, describes a path that almost no customer actually follows. Real customers circle back. They research, get distracted, return weeks later, compare alternatives, abandon carts, come back through a different channel, and finally convert in a way that your attribution model will credit to whichever touchpoint fired last.

I managed campaigns across 30 different industries over my career, and the one consistent finding was that the higher the purchase value, the more non-linear the experience. A customer buying a B2B software contract worth £200,000 per year doesn’t move through a funnel. They orbit a decision for months, involving multiple stakeholders, consuming content at irregular intervals, attending webinars, reading case studies, and having internal conversations that you have no visibility into at all.

For B2B specifically, the funnel model needs to be replaced, or at least supplemented, with something that accounts for buying committees, extended evaluation periods, and the fact that the person who fills in the contact form is rarely the person who makes the final decision. Optimising the contact form is fine. Thinking that’s where the conversion happens is a category error.

Content plays a significant role in the middle stages of these longer journeys, and Moz’s analysis of how organic content maps to the conversion funnel is useful context for understanding how search behaviour changes at different stages of consideration. The implication for CRO is that optimising for conversion at the top of the funnel, where intent is exploratory, is a different problem than optimising at the bottom, where intent is transactional.

What Good Funnel Analysis Actually Looks Like

Good funnel analysis starts with a question, not a dashboard. The question should be commercial: where is revenue being lost, and what’s the most likely cause? Not: what does our funnel report show, and what can we optimise?

The sequence I’ve found most useful over the years is this. Start with the commercial outcome you’re trying to improve. Work backwards to identify which stage of the funnel is most likely constraining that outcome. Then gather data, both quantitative and qualitative, to form a hypothesis about why that constraint exists. Then test.

That sounds obvious. It isn’t how most teams operate. Most teams start with the data they have, find the most visible drop-off point, and optimise it. The problem is that the most visible drop-off isn’t always the most important one. And the most important constraint is often upstream of where the data makes it easy to look.

Filtering your behavioural data properly matters here. Segmenting heatmap and session data by traffic source, device type, and user segment often reveals that what looks like a single funnel is actually several different funnels with very different dynamics. A mobile user arriving from a social ad has a different relationship with your product page than a desktop user arriving from branded search. Treating them as one population and optimising for the average is a reliable way to improve nothing for anyone.

For ecommerce teams thinking about how behavioural data feeds into broader CRO strategy, Hotjar’s ecommerce CRO resource covers the segmentation and analysis side in useful detail.

The Attribution Trap at the Bottom of Every Funnel

Every funnel analysis eventually runs into the attribution problem. The conversion happened. Multiple touchpoints contributed. Your measurement system will assign credit to one of them, and the one it assigns credit to will get more budget, and the ones it doesn’t will get cut. This is how good marketing gets defunded.

I’ve watched this play out in organisations large and small. A brand runs a significant TV campaign. Branded search volume increases. Paid search captures that increased branded intent and records a sharp improvement in conversion rate and return on ad spend. The TV budget gets cut because it’s hard to measure. Branded search volume falls. Paid search performance deteriorates. The team concludes that “the market has changed” and looks for new tactics.

The funnel didn’t fail. The measurement model failed. And the measurement model failed because it was designed to credit the last touchpoint rather than understand the full system.

This doesn’t mean attribution is unsolvable. It means the solution requires intellectual honesty about what your data can and cannot tell you. Marketing mix modelling, incrementality testing, and brand tracking surveys are all imperfect, but they’re more honest approximations of reality than last-click attribution dressed up as precision.
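The core logic of a simple incrementality test, stripped of the statistical machinery a real one needs, looks like this. All numbers are invented, and a genuine test would require matched regions, adequate sample sizes, and significance testing; this sketch only shows why the holdout comparison is more honest than last-click credit:

```python
# Hypothetical geo holdout: one matched region sees the campaign (treated),
# a comparable region does not (control). Incremental lift is the
# difference in conversion rate, not the treated rate on its own.
treated_sessions, treated_conversions = 50_000, 1_250   # 2.5% with campaign
control_sessions, control_conversions = 50_000, 1_000   # 2.0% without

treated_cvr = treated_conversions / treated_sessions
control_cvr = control_conversions / control_sessions

incremental_cvr = treated_cvr - control_cvr
incremental_conversions = incremental_cvr * treated_sessions

# Last-click attribution would credit all 1,250 conversions to the
# campaign; the holdout suggests only ~250 were actually incremental.
print(f"incremental conversions: {incremental_conversions:.0f}")  # 250
```

The other 1,000 conversions are the existing intent the campaign captured rather than created, which is exactly the distinction last-click attribution cannot make.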

The broader principles of conversion optimisation, including how to build measurement frameworks that don’t just flatter the metrics you already have, are covered in more depth across the CRO and Testing section of The Marketing Juice. If you’re building or reviewing a CRO programme, it’s worth reading the measurement and commercial impact pieces alongside the tactical ones.

What the Funnel Gets Right (and Why You Shouldn’t Abandon It)

After all of the above, it would be easy to conclude that the funnel model is useless and should be replaced with something more sophisticated. That’s not the argument. The funnel is a genuinely useful thinking tool. It creates a shared language for discussing customer progression, it helps teams identify where to focus attention, and it provides a framework for connecting marketing activity to commercial outcomes.

The problem isn’t the model. The problem is the misuse of the model. Specifically: treating it as a measurement system with precise attribution, assuming customers move through it linearly, and optimising the bottom while neglecting the top.

Used well, funnel thinking prompts the right questions. Are we reaching enough of the right people? Are we giving them enough reason to consider us seriously? Are we removing the friction and doubt that stops them from acting? Those are good questions. The funnel helps you ask them in sequence and with commercial logic.

What it can’t do is tell you the answers. That requires data, judgement, customer understanding, and a willingness to question what the numbers appear to be saying. The funnel is a map. The territory is always more complicated.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

Why does lower-funnel marketing often overstate its contribution to revenue?
Lower-funnel channels like branded paid search tend to capture customers who had already decided to buy. Standard attribution models credit the final click, which makes these channels look highly efficient. The issue is that much of this revenue would have arrived regardless. Without incrementality testing or marketing mix modelling, it’s difficult to separate genuine conversion lift from demand that was already there.
What is the difference between a volume problem and a conversion problem in a marketing funnel?
A conversion problem means the right people are entering the funnel but not completing it, usually due to friction, lack of trust, or poor message match. A volume problem means not enough of the right people are entering the funnel at all. The two require completely different responses. CRO addresses conversion problems. Brand and awareness investment addresses volume problems. Misdiagnosing one as the other is one of the most common and costly errors in marketing planning.
How should you analyse a marketing funnel when customers don’t follow a linear path?
Start by mapping the actual touchpoints customers use across the full purchase cycle, not just the ones your analytics platform tracks easily. Segment your funnel analysis by traffic source, device, and customer type, since different audiences behave differently within the same funnel. Use qualitative research alongside quantitative data to understand where confidence breaks down, not just where sessions end. For high-consideration purchases, expect non-linear journeys and build your analysis around decision stages rather than click sequences.
What is the most common mistake when using heatmaps and session recordings for funnel analysis?
Treating behavioural data as a diagnosis rather than a signal. Heatmaps show you where attention goes and where people stop interacting. They don’t explain why. A drop-off at a specific page element might reflect a UX problem on that page, or it might reflect a broken expectation set earlier in the experience. Acting on heatmap data without forming a hypothesis about the underlying cause often leads to cosmetic changes that improve nothing meaningful.
How do you know if your marketing funnel is healthy at the top as well as the bottom?
Track metrics that reflect audience reach and brand consideration, not just conversion performance. Brand search volume, direct traffic trends, aided and unaided brand awareness in your target market, and new customer acquisition rates are all indicators of top-funnel health. If conversion rates are holding steady but total revenue is flat or declining, the most likely explanation is a shrinking audience entering the funnel, not a conversion problem on the site.
