Customer Journeys Are Broken Before Marketing Touches Them
A customer journey maps the path a person takes from first awareness of your brand through to purchase and beyond. Done well, it is one of the most commercially useful tools in marketing. Done poorly, it becomes a wall chart that looks impressive in a strategy presentation and collects dust for the next three years.
The honest version of most customer journey work is this: companies map the journey they wish customers took, not the one customers actually take. That gap is where growth stalls, churn accelerates, and marketing spend quietly disappears into a void.
Key Takeaways
- Most customer journey maps reflect internal assumptions rather than observed customer behaviour, which makes them decorative rather than operational.
- The biggest friction points in a customer journey are rarely in the channels marketing controls, which is why fixing the map without fixing the journey changes nothing.
- Journey mapping only creates commercial value when it is tied directly to specific decisions: what to change, what to stop, and where to invest.
- Omnichannel consistency matters more than omnichannel presence. Being everywhere badly is worse than being in fewer places well.
- Marketing is often deployed to compensate for experience failures that sit outside marketing’s remit. Recognising that distinction is the first step to spending more effectively.
In This Article
- Why Most Customer Journey Maps Are Fiction
- What a Customer Journey Actually Includes
- The Difference Between Mapping and Fixing
- Where the Real Friction Lives
- How to Build a Journey Map That Is Actually Useful
- The Role of Content Across the Journey
- When the Journey Is Broken Before Marketing Starts
- Measurement: What to Track and What to Ignore
- The Organisational Problem Journey Mapping Exposes
Why Most Customer Journey Maps Are Fiction
I have sat in more journey mapping workshops than I care to count. The format is familiar: a long table, sticky notes in four colours, a facilitator with a marker pen, and a group of people who represent marketing, sales, product, and customer service. The output is usually a horizontal flow diagram with five or six stages and a set of emotions attributed to the customer at each one.
The problem is that nobody in that room is the customer. What gets mapped is a consensus view of how internal teams believe customers behave. It is built from partial data, departmental assumptions, and the loudest voice in the room. The result is a journey that reflects the organisation’s structure more than it reflects customer reality.
Early in my career I worked on an account where the client had spent considerable time and money producing a beautifully designed customer journey map. Six stages, colour-coded touchpoints, emotion curves across the top. It was genuinely impressive to look at. When we went out and actually spoke to customers, we found that a significant portion of them were abandoning the process at a stage the internal team had categorised as “smooth and frictionless.” The friction was real. It just happened to sit in a part of the business that no one in the workshop had represented.
That experience has shaped how I approach journey work ever since. The map is only as good as the data that built it, and the data is only as good as the questions you asked and the people you asked them of.
What a Customer Journey Actually Includes
The standard model breaks a customer journey into awareness, consideration, decision, purchase, and retention. It is a useful scaffold. It is not the whole picture.
What the standard model tends to underweight is everything that happens between touchpoints. The customer who sees your ad, visits your site, gets distracted, comes back three days later via a different device, reads two reviews on a third-party platform, calls your customer service line with a question, gets a mediocre answer, and then decides to buy from a competitor anyway. That is a journey. None of it shows up cleanly in a five-box diagram.
The omnichannel customer journey is where most journey models fall apart in practice. Customers do not experience your brand as a set of discrete channels. They experience it as one thing, and they expect it to be coherent. When it is not, the friction is attributed to the brand, not to the internal team that failed to coordinate.
A customer who has a poor experience in-store does not separate that from their online experience. A customer who receives inconsistent pricing across channels does not give credit for the channels that got it right. The journey is continuous in the customer’s mind even when it is siloed in yours.
This is the broader territory that the customer experience hub covers in depth, from the structural problems that create poor journeys through to the operational changes that fix them. Journey mapping is one tool within that larger discipline, and it works best when it is treated as an input to operational decisions rather than an output of a strategy process.
The Difference Between Mapping and Fixing
There is a version of customer journey work that is essentially a diagnostic exercise. You map the current state, identify the friction points, and produce a set of recommendations. That version has value. It is not, however, the same as fixing anything.
The gap between mapping and fixing is where most journey initiatives stall. The map gets produced, the friction points get identified, and then the recommendations land in a backlog somewhere between “Q3 priorities” and “nice to have.” Six months later, the journey is unchanged and the map is out of date.
When I was running agencies, one of the things I pushed hard on was connecting journey work directly to commercial decisions. Not “here are the friction points” but “here is what it is costing you, here is what fixing it would be worth, and here is the specific change required.” That framing changes the conversation. It moves journey mapping from a marketing exercise to a business case.
The tools exist to support that kind of thinking. A well-configured customer journey dashboard can connect journey data to commercial outcomes in a way that makes the business case visible. The challenge is rarely the tooling. It is the willingness to tie customer journey metrics to revenue and cost in a way that creates genuine accountability.
Where the Real Friction Lives
Marketing teams tend to focus journey improvement efforts on the parts of the journey they control: the ad creative, the landing page, the email sequence, the onboarding flow. These are the visible, measurable, optimisable parts of the journey. They are also, in my experience, rarely where the most significant friction lives.
The deepest friction points tend to sit in customer service response times, in product or service quality, in pricing complexity, in billing processes, and in the gap between what was promised in marketing and what was delivered in reality. None of those are primarily marketing problems. But marketing tends to feel the consequences of them in the form of poor conversion rates, high churn, and a customer acquisition cost that never comes down no matter how much you optimise the funnel.
I have seen this play out repeatedly across different categories. A financial services client was spending heavily on acquisition because their retention rate was poor. The retention problem was not a marketing problem. It was a product problem: customers were buying something that did not do what they expected it to do, based on marketing that had overstated the benefits. More marketing spend was making the problem worse, not better. The journey map made this visible. The fix required a product change and a recalibration of the marketing message, not another round of funnel optimisation.
This is the uncomfortable truth about customer journey work: it often reveals problems that marketing cannot solve. The organisations that benefit most from it are the ones willing to act on that finding rather than scope it out of the project.
How to Build a Journey Map That Is Actually Useful
The methodology matters less than the inputs. A journey map built on genuine customer data, observed behaviour, and honest internal assessment will outperform a beautifully designed one built on assumptions. That said, there are a few principles that consistently separate useful journey maps from decorative ones.
Start with a specific customer segment, not “the customer.” “The customer” does not exist. Different segments take different paths, have different motivations, and encounter different friction points. A map that tries to represent everyone ends up accurately representing no one. Pick the segment that matters most commercially and map their journey specifically.
Use observed behaviour, not stated preference. What customers say they do and what they actually do are frequently different. Session recordings, support ticket analysis, call centre transcripts, and churn interview data are more reliable inputs than surveys asking customers to describe their own behaviour.
Map the current state before the ideal state. Many journey mapping exercises skip straight to “what should the journey look like.” That is a useful exercise, but it is not the same as understanding what the journey currently looks like. The current state map is where the commercial opportunity lives.
Assign ownership to every friction point. A friction point without an owner is just an observation. For every identified friction point, there should be a named team or individual responsible for addressing it and a timeline for doing so. Without that, the map is documentation, not a plan.
Revisit it. A journey map is not a one-time deliverable. Customer behaviour changes, product features change, competitive context changes. A map that was accurate eighteen months ago may be actively misleading today. Build a review cadence into the process from the start.
There are increasingly capable tools to support this kind of structured thinking. Using AI to pressure-test journey assumptions is one area worth exploring, not as a replacement for customer research, but as a way of stress-testing the internal logic of a journey model before you take it into the field.
The Role of Content Across the Journey
One of the practical applications of journey mapping that gets underused is content planning. When you know where customers are in their decision process, what questions they are asking, and what friction they are encountering, you have a clear brief for what content needs to exist.
Most content strategies are built around what the brand wants to say rather than what the customer needs to hear at a specific point in their decision process. Journey-led content planning inverts that. It starts with the question: at this stage, what does this customer need to know, believe, or feel in order to move forward?
That question produces very different content briefs than a brainstorm session around brand themes. It produces content that is specific, useful, and timed to the moment when it will have the most impact. Writing with genuine authority at each stage of the journey is what separates content that moves people from content that simply exists.
I have seen this work well in B2B contexts particularly. A client in professional services had a long consideration cycle, typically six to twelve months from first contact to contract. The content they had was almost entirely top-of-funnel awareness material. There was almost nothing designed to support the evaluation stage, where prospects were comparing options, managing internal stakeholders, and building a business case. Mapping the journey made that gap visible. Filling it with the right content shortened the consideration cycle measurably.
When the Journey Is Broken Before Marketing Starts
There is a version of this conversation that marketing teams find uncomfortable, which is the version where the journey analysis reveals that the core product or service is the problem. Not the messaging. Not the funnel. Not the channel mix. The thing itself.
I have spent a good part of my career working with businesses that were using marketing to paper over fundamental product or service problems. More spend, more channels, more creative iterations, all in service of acquiring customers who would then churn at a rate that made the whole exercise economically unsustainable. The journey map, if anyone had been willing to read it honestly, would have shown the same thing every time: customers were leaving because the product did not deliver what the marketing promised.
This is the broader argument I keep coming back to. When a company genuinely delivers on its promise at every stage of the customer journey, marketing’s job becomes significantly easier and significantly more effective. You are amplifying something real rather than compensating for something broken. The journey map is often the instrument that makes the diagnosis possible. Whether the organisation is willing to act on it is a different question entirely.
Understanding where experience failures originate, and which ones are genuinely within marketing’s power to address, is one of the more valuable things a senior marketer can do. It is also one of the things that requires the most political capital, because the answer frequently implicates teams outside marketing.
Measurement: What to Track and What to Ignore
Journey mapping generates a long list of potential metrics. Conversion rates at each stage, time in stage, drop-off rates, satisfaction scores, Net Promoter Score, Customer Effort Score. The list is long enough to become its own distraction.
The metrics that matter are the ones connected to the specific friction points you have identified and the commercial outcomes you are trying to improve. If you have identified that the consideration-to-purchase conversion rate is the primary lever, that is what you measure. If churn in the first ninety days is the problem, that is where the measurement focus goes.
Tracking everything is not the same as understanding anything. I have seen marketing teams produce forty-slide journey analytics decks every month that nobody acted on, because the volume of data had no connection to a specific decision. The discipline is in selecting the three or four metrics that tell you whether the journey is improving, and ignoring the rest until they become relevant.
AI tools are increasingly useful for processing journey data at scale and surfacing patterns that would be difficult to identify manually. AI-assisted customer journey analysis can accelerate the diagnostic phase of journey work considerably. The caveat, as always, is that the analysis still requires human judgment about what matters and what to do about it.
Chatbots and automated touchpoints generate substantial journey data as a byproduct of their operation. Customer service chatbot interactions in particular can reveal friction points that customers never articulate directly, because the questions they ask and the points at which they abandon a conversation are a direct signal of where the experience is failing them.
The Organisational Problem Journey Mapping Exposes
One of the things I find most useful about serious journey mapping work is what it reveals about organisational structure. Most customer journeys cross multiple internal teams. Marketing owns awareness. Sales owns conversion. Product owns the usage experience. Customer service owns the support interactions. Finance owns billing. Each team optimises for its own metrics and its own part of the process.
The customer experiences none of those boundaries. They experience one thing. When the handoffs between teams are poorly managed, the customer feels it as inconsistency, confusion, or friction, without any visibility into which team is responsible.
Journey mapping makes those handoff failures visible in a way that internal reporting rarely does, because internal reporting is structured around team performance rather than customer experience. A team can be hitting all of its internal metrics while contributing to a journey that is failing the customer at the moments that matter most.
When I grew the team at iProspect from around twenty people to over a hundred, one of the consistent challenges was ensuring that client experience remained coherent as the organisation became more specialised. More specialists meant more potential for the client to experience inconsistency. The journey mapping discipline, applied internally to how clients experienced working with us, was genuinely useful in identifying where the coordination failures were happening and which ones were costing us commercially.
For more on how the structural and operational dimensions of customer experience interact, the customer experience section of The Marketing Juice covers the full landscape, from measurement frameworks through to the organisational conditions that make good CX possible in practice.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
