Customer Journey Analysis: What the Map Won’t Tell You
Customer journey analysis is the process of examining how real customers move through every interaction with your brand, from first awareness to purchase and beyond, using behavioural data to identify where the experience breaks down and where it compounds value. Done well, it replaces assumption with evidence and gives you a clear line of sight between customer behaviour and commercial outcomes.
Most companies have a journey map on a wall somewhere. Far fewer have an analysis practice that tells them what is actually happening, why customers drop off, and what it costs them in revenue terms when they do.
Key Takeaways
- Journey maps are hypotheses. Journey analysis is the evidence that confirms or contradicts them. Treating the map as the destination is where most CX programmes stall.
- The most commercially damaging friction points are rarely the obvious ones. They tend to sit in the middle of the journey, where customers are already invested but not yet converted.
- Qualitative and quantitative data answer different questions. Drop-off rates tell you where something is wrong. Exit surveys and session recordings tell you why.
- Customer journey analysis only creates value when it connects to a decision. If the output is a report that no one acts on, the analysis was a cost, not an investment.
- The brands that grow consistently are usually the ones that have made the experience so good that marketing becomes less necessary. Analysis is how you find those compounding moments.
In This Article
- Why Most Journey Maps Collect Dust
- What Customer Journey Analysis Actually Involves
- The Data Sources That Actually Matter
- How to Structure the Analysis Without Getting Lost in the Data
- The Segments That Change Everything
- Where AI Fits Into Journey Analysis
- The Friction Points Companies Consistently Ignore
- Connecting Journey Analysis to Commercial Outcomes
- Making Journey Analysis a Practice, Not a Project
Why Most Journey Maps Collect Dust
I have sat in more journey mapping workshops than I care to count. The sticky notes go up, the swimlanes get drawn, someone takes a photo for the internal newsletter, and the output gets filed somewhere between the brand guidelines and last year’s strategy deck. Six months later, nothing has changed.
The problem is not the mapping exercise itself. The problem is that most organisations treat the map as the deliverable rather than the starting point. A journey map is a hypothesis about how customers experience your brand. It is, at best, an educated guess built from internal assumptions and a handful of customer interviews. It becomes useful only when you test it against real behavioural data and find out where your hypothesis was wrong.
When I was running agencies, clients would occasionally come to us with beautifully produced journey maps. Consultants had been paid significant sums to produce them. But when we looked at the actual data, the drop-off patterns told a completely different story. The map said customers were lost at the awareness stage. The data said they were getting all the way to the product page and abandoning there, which is an entirely different problem requiring an entirely different solution.
Journey analysis is what closes that gap. It is the practice of using actual customer data (behavioural, transactional, and qualitative) to understand what is really happening rather than what you think is happening.
If you are building out your broader approach to customer experience measurement and strategy, the Customer Experience hub at The Marketing Juice covers the full landscape, from KPI frameworks to retention mechanics.
What Customer Journey Analysis Actually Involves
The term gets used loosely, so it is worth being precise. Customer journey analysis involves three distinct activities, and conflating them is where programmes go wrong.
The first is touchpoint mapping with data. This means identifying every interaction a customer has with your brand across channels and overlaying real behavioural data onto each one. Not what you designed the touchpoint to do, but what customers actually do when they encounter it. Where do they come from? How long do they spend? What do they do next? Where do they leave?
The second is friction identification. This is where the commercial value sits. Every journey has points where customers slow down, hesitate, or abandon entirely. Some of that friction is unavoidable. Some of it is entirely self-inflicted: a form field that is too long, a checkout process that requires account creation, a support experience that makes people feel like a burden rather than a customer. Identifying which friction points are costing you the most revenue is the core analytical task.
The third is moment identification. Not all moments in a journey are equal. Some moments have a disproportionate influence on whether a customer stays, converts, upgrades, or refers. These are sometimes called moments of truth, though I find that phrase a little theatrical. The point is that a small number of interactions tend to do most of the work in shaping how a customer feels about your brand. Finding those moments and investing in them is where analysis translates into growth.
Tools like Crazy Egg’s breakdown of customer journey mechanics are useful for understanding how behavioural data fits into this analytical process, particularly around on-site behaviour and session analysis.
The Data Sources That Actually Matter
One of the more common mistakes I see is organisations building their journey analysis on a single data source. Usually web analytics. Sometimes CRM data. Occasionally survey results. Each of these gives you a partial view. None of them gives you the full picture on its own.
Here is how I think about the data stack for a credible journey analysis:
Behavioural data tells you what customers did. Web analytics, app event tracking, heatmaps, session recordings. This is your highest-volume, most granular source of evidence about how customers actually move through your digital touchpoints. It is also the most prone to misinterpretation, because it tells you what happened but not why.
Transactional data tells you what customers bought, when, how often, and how much they spent. When you overlay this onto behavioural data, you start to see which journey patterns correlate with high-value customers and which ones correlate with one-and-done purchases. This is where the CLV connection becomes visible.
Voice of customer data tells you why. Exit surveys, post-purchase surveys, support transcripts, review data. This is the layer that gives meaning to the numbers. A 40% drop-off at checkout is a fact. The reason for that drop-off (unexpected shipping costs, a confusing promo code field, a payment method that is not available) comes from qualitative sources.
Operational data tells you what happened when things went wrong. Support ticket volumes, resolution rates, call centre data, returns and refund rates. These are often excluded from journey analysis because they sit in different systems owned by different teams. That is a mistake. The post-purchase experience is part of the journey, and operational data is how you measure it.
Mailchimp’s overview of customer journey analytics covers how these data types fit together in practice, which is worth reading if you are building out your measurement architecture.
How to Structure the Analysis Without Getting Lost in the Data
This is where most in-house teams struggle. They have access to plenty of data. What they lack is a structured approach to turning that data into a clear picture of where the journey is working and where it is not.
The framework I have used across multiple client engagements starts with four questions, applied to each stage of the journey in sequence.
1. How many customers reach this stage? This establishes your funnel volumes. You are looking for the stages where significant numbers of customers drop off relative to the stage before. Those are your priority areas for investigation.
2. What do customers do when they get here? This is the behavioural layer. What actions do they take? What do they engage with? What do they ignore? Where do they spend time and where do they move through quickly?
3. What are customers trying to accomplish at this stage? This is the intent layer, and it requires qualitative input. A customer on a product page is not just browsing. They are trying to answer specific questions before they feel confident enough to proceed. If your product page is not answering those questions, they will leave. The data tells you they left. The qualitative research tells you what question went unanswered.
4. What would need to change for more customers to progress? This is where analysis becomes action. The answer might be a content change, a UX improvement, a pricing adjustment, a channel shift, or a fundamental product issue. The important thing is that you are now working from evidence rather than assumption.
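The first question, establishing funnel volumes and finding the biggest leak, is simple enough to sketch in code. The stage names and counts below are entirely hypothetical, and real funnels come from your analytics platform rather than a hard-coded list, but the mechanics are the same: compute stage-to-stage drop-off and rank it.

```python
# Hypothetical funnel volumes: customers reaching each journey stage.
funnel = [
    ("Visited site", 10_000),
    ("Viewed product page", 6_200),
    ("Added to basket", 1_800),
    ("Started checkout", 1_500),
    ("Completed purchase", 900),
]

# Drop-off at each stage relative to the stage before.
drop_offs = []
for (prev_stage, prev_n), (stage, n) in zip(funnel, funnel[1:]):
    rate = 1 - n / prev_n
    drop_offs.append((stage, rate))

# The largest relative drop-off is the priority area for investigation.
worst_stage, worst_rate = max(drop_offs, key=lambda x: x[1])
print(f"Biggest leak: {worst_stage} ({worst_rate:.0%} drop-off)")
# → Biggest leak: Added to basket (71% drop-off)
```

Note that relative drop-off is the right lens here: a 40% fall late in the funnel can matter more commercially than a larger fall at the top, which is why the next three questions exist.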
Optimizely’s work on digital optimisation across the customer experience is a useful reference for how this kind of structured analysis connects to experimentation and testing programmes.
The Segments That Change Everything
Aggregate journey analysis is a starting point. The real insight comes when you segment.
I worked with a retail client several years ago where the overall conversion rate looked acceptable. Nothing alarming. But when we segmented the journey by acquisition channel, the picture changed dramatically. Customers coming from paid search converted at a reasonable rate and had decent retention. Customers coming from a particular affiliate channel converted at a similar rate initially, but their return rate was three times higher and their lifetime value was a fraction of the paid search cohort. The aggregate data was masking a significant problem.
The segments worth analysing vary by business, but the ones I return to most often are:
Acquisition channel segments. Customers from different channels often have fundamentally different needs, expectations, and behaviours. A customer who found you through a brand search already has some intent and familiarity. A customer who clicked a display ad may not. Their journeys should be analysed separately.
First purchase versus repeat purchase segments. The journey for a new customer is not the same as the journey for someone who has bought from you before. Treating them the same in your analysis, and in your journey design, is a common and costly error.
High-value versus low-value customer segments. If you can identify what journey patterns correlate with your highest-value customers, you can work backwards to understand what you should be doing more of and what you should stop doing. This is one of the most commercially useful outputs of journey analysis.
Churned customer segments. Analysing the journeys of customers who left, specifically what their behaviour looked like in the weeks before they churned, often reveals early warning signals that are invisible in aggregate data. Those signals can become the basis of an early intervention programme.
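The retail example above, where a blended conversion rate hid a weak affiliate cohort, can be illustrated with a minimal grouping sketch. The channel names, lifetime values, and return flags are invented for the example; in practice these rows would come from your CRM or transactional data joined to acquisition source.

```python
from collections import defaultdict

# Hypothetical customer records: (acquisition_channel, lifetime_value, returned_order).
customers = [
    ("paid_search", 220.0, False),
    ("paid_search", 180.0, False),
    ("paid_search", 95.0, True),
    ("affiliate", 60.0, True),
    ("affiliate", 45.0, True),
    ("affiliate", 150.0, False),
]

# Aggregate by channel so differences hidden in the blended numbers show up.
by_channel = defaultdict(list)
for channel, ltv, returned in customers:
    by_channel[channel].append((ltv, returned))

for channel, rows in by_channel.items():
    avg_ltv = sum(ltv for ltv, _ in rows) / len(rows)
    return_rate = sum(returned for _, returned in rows) / len(rows)
    print(f"{channel}: avg LTV £{avg_ltv:.0f}, return rate {return_rate:.0%}")
```

In this toy data the blended average looks unremarkable, but split by channel the affiliate cohort shows roughly half the lifetime value and double the return rate, which is exactly the kind of pattern aggregate reporting masks.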
HubSpot’s piece on customer experience personalisation covers how segmentation connects to personalised experience delivery, which is the logical next step once your analysis has identified meaningful behavioural differences between groups.
Where AI Fits Into Journey Analysis
There is a version of this conversation that turns into an AI hype piece. I am not going to write that version.
The honest position is that AI is genuinely useful for journey analysis in specific ways, and not particularly useful in others. The useful applications are mostly about scale and pattern recognition. AI tools can process larger volumes of behavioural data than human analysts can, identify non-obvious correlations between journey patterns and outcomes, and flag anomalies in real time that would otherwise take weeks to surface in a monthly report.
Where AI is less useful is in the interpretive and strategic layers. Understanding why customers behave a certain way, what it means for your product or journey design, and what trade-offs to make when fixing it: those are still fundamentally human judgement calls. The analysis tells you what is happening. The strategy tells you what to do about it. AI can accelerate the former. It cannot replace the latter.
Moz’s Whiteboard Friday on using ChatGPT for customer journey mapping is a grounded look at where AI tools add genuine value in this process and where the limits are. Worth watching if you are evaluating how to incorporate AI into your analysis workflow.
HubSpot also has a useful perspective on how AI can improve customer experience more broadly, which provides context for where journey analysis fits into a larger AI-enabled CX programme.
The Friction Points Companies Consistently Ignore
After two decades of looking at customer journeys across thirty-odd industries, there are patterns in where companies allow friction to persist. Not because they do not care, but because the friction sits in parts of the business that are harder to change, or because it is owned by a team that is not connected to the customer experience conversation.
The post-purchase experience is the most consistently neglected. Companies invest heavily in the acquisition experience and the conversion moment, then largely abandon the customer once the transaction is complete. The confirmation email is automated. The delivery update is automated. The first customer service interaction, if there is one, is often the first time a human being is involved. For many customers, that first interaction after purchase is the moment that determines whether they come back. And it is frequently the worst experience in the entire journey.
I ran a client engagement in financial services where the product was genuinely competitive and the acquisition marketing was well-executed. But the onboarding experience after sign-up was a disaster. Customers were handed off to an operations team that communicated in compliance language, sent documents that required printing and posting, and provided no clear timeline for when the account would be active. The drop-off rate in that post-sign-up window was significant, and nobody had connected it to the acquisition investment being wasted. The marketing team thought their job was done at sign-up. It was not.
The other consistently ignored friction point is the support experience. When customers contact support, they are already in a moment of stress or frustration. How that interaction goes has an outsized effect on their perception of the brand. Yet support data is rarely integrated into journey analysis. It sits in a separate system, owned by a separate team, measured on separate metrics. The result is that companies optimise the front end of the journey while allowing the back end to quietly destroy the loyalty they worked hard to build.
Connecting Journey Analysis to Commercial Outcomes
This is the part that separates useful analysis from interesting analysis.
Every friction point you identify should be translated into a commercial cost. If 15% of customers abandon at checkout and your average order value is £80, the revenue implication of reducing that abandonment rate by 5 percentage points is a concrete number. That number is what gets the investment approved, the development resource prioritised, and the leadership team engaged.
When I was growing an agency from a small team to over 100 people, one of the things I learned early was that any analysis that did not connect to a financial outcome would not get traction with clients. Not because clients did not care about experience, but because they were running businesses with competing priorities and finite budgets. The analysis had to answer the question: what is this costing us, and what would fixing it be worth? Journey analysis that cannot answer those questions is a cost centre. Journey analysis that can answer them is a growth function.
The mechanics of building that commercial connection are not complicated. You need three things: a volume figure (how many customers are affected), a value figure (what those customers are worth), and a conversion figure (what percentage improvement is realistic). With those three numbers, you can build a business case for almost any experience improvement.
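As a minimal sketch of that three-number business case, here is the checkout example from above expressed as a function. The 10,000 monthly checkout starters are a hypothetical volume figure; the £80 average order value and the 5-percentage-point uplift come from the example in the text.

```python
def revenue_opportunity(monthly_customers: int,
                        avg_order_value: float,
                        uplift_pct_points: float) -> float:
    """Monthly revenue gained if the progression rate for the affected
    customers improves by the given number of percentage points."""
    extra_orders = monthly_customers * (uplift_pct_points / 100)
    return extra_orders * avg_order_value

# Hypothetical volume: 10,000 customers start checkout each month.
# £80 average order value, realistic 5-point reduction in abandonment.
print(f"£{revenue_opportunity(10_000, 80.0, 5):,.0f} per month")
# → £40,000 per month
```

A number like that, rather than a drop-off percentage on its own, is what gets a checkout fix prioritised against everything else competing for development resource.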
SMS and direct communication channels are increasingly relevant in this context. Mailchimp’s work on SMS customer engagement is worth reviewing if you are thinking about how to intervene at specific journey moments where customers are at risk of dropping off.
Making Journey Analysis a Practice, Not a Project
The biggest structural problem with journey analysis in most organisations is that it happens once. A project is commissioned, data is gathered, a report is produced, recommendations are made, some of them are implemented, and then the organisation moves on. Eighteen months later, someone commissions another journey analysis and discovers that the same friction points still exist, or that new ones have emerged.
The companies that get the most value from journey analysis treat it as an ongoing practice rather than a periodic project. That means having a regular cadence of data review, a clear ownership model for each stage of the journey, and a process for turning findings into prioritised actions.
It also means resisting the temptation to boil the ocean. One of the most valuable things you can do in a journey analysis programme is be ruthless about prioritisation. Not every friction point is worth fixing. Some are low-volume and low-impact. Others are high-volume but structurally difficult to change. The ones worth focusing on are high-volume, high-impact, and actionable within a reasonable timeframe. Finding those and fixing them, consistently, over time, is what compounds into a meaningfully better customer experience.
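One simple way to operationalise that prioritisation is a scoring pass over the backlog of friction points. The items and 1-to-5 scores below are invented for illustration; the multiplicative score is a design choice, since a near-zero dimension (say, something structurally unfixable) should sink the priority rather than be averaged away, which a simple sum would do.

```python
# Hypothetical friction points scored 1-5 on volume, impact, and actionability.
friction_points = [
    {"name": "Checkout requires account creation", "volume": 5, "impact": 4, "actionability": 4},
    {"name": "Slow support first response",        "volume": 3, "impact": 5, "actionability": 2},
    {"name": "Confusing promo code field",         "volume": 2, "impact": 2, "actionability": 5},
]

def priority(fp: dict) -> int:
    # Multiplicative: any weak dimension drags the whole score down.
    return fp["volume"] * fp["impact"] * fp["actionability"]

for fp in sorted(friction_points, key=priority, reverse=True):
    print(f"{priority(fp):>3}  {fp['name']}")
```

The scores themselves matter less than the discipline: every candidate fix gets the same three questions asked of it, and the answer is a ranking rather than whoever argued loudest in the meeting.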
There is a broader truth underneath all of this. Companies that genuinely invest in understanding and improving the customer experience tend to need less aggressive marketing over time. Not because marketing stops mattering, but because a good experience does some of the work that marketing would otherwise have to do. Retention improves, referrals increase, and the cost of acquiring new customers relative to the value they generate starts to look more favourable. Journey analysis is how you find the levers that make that happen.
For a broader view of how journey analysis connects to the full customer experience discipline, including the KPI frameworks and retention mechanics that sit alongside it, the Customer Experience hub is where I have pulled together the complete picture.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
