The Customer Journey Is Not What Most Marketers Think It Is
The customer journey describes every interaction a person has with your brand, from the moment they first become aware of you through to purchase, repeat business, and advocacy. It is not a funnel. It is not a straight line. And it is almost certainly more complicated, more emotional, and more commercially significant than most organisations give it credit for.
Understanding it properly, and doing something useful with that understanding, is one of the highest-value things a marketing team can do. Getting it wrong is expensive in ways that rarely show up cleanly on a dashboard.
Key Takeaways
- The customer journey is not a funnel or a linear sequence. It is a web of touchpoints, emotions, and decisions that rarely follows the path marketers design for it.
- Most journey mapping exercises produce wall art, not commercial insight. The ones that drive change are built around real customer behaviour, not internal assumptions.
- Friction compounds. A small irritation at one touchpoint does not stay contained. It colours how customers interpret every interaction that follows.
- Marketing cannot fix a broken customer experience. Spending on acquisition while the post-purchase experience is poor simply accelerates churn.
- The most commercially valuable thing you can do with a journey map is identify where customers are quietly leaving, and why.
In This Article
- What Does the Customer Journey Actually Mean?
- Why Journey Mapping Produces So Much Useless Output
- The Stages That Actually Matter (and What Goes Wrong at Each One)
- The Role of Omnichannel in the Modern Journey
- How to Measure the Journey Without Fooling Yourself
- Technology’s Role in the Journey: What It Can and Cannot Do
- The Friction Problem: Why Small Irritations Have Large Consequences
- Building a Journey That Actually Improves Over Time
This article is part of the Customer Experience Hub, where I cover the full range of disciplines that sit between a customer and a brand, from how you capture feedback to how you build systems that make loyalty the natural outcome of good operations.
What Does the Customer Journey Actually Mean?
The phrase gets used constantly and defined almost never. I have sat in planning sessions where “customer journey” meant the email sequence after sign-up, the paid media funnel, the UX flow through a website, and the entire lifetime relationship with a brand, all in the same meeting, by different people. Nobody noticed the disconnect.
So let me be specific. The customer journey is the sum of all experiences a customer has with your brand across time and across channels. It includes the things you control directly: your advertising, your website, your customer service team. It includes the things you influence indirectly: word of mouth, third-party reviews, how your product performs in someone’s actual life. And it spans from first awareness through consideration, purchase, onboarding, ongoing use, and eventually either renewal or departure.
What it is not is a tidy diagram with five boxes and arrows pointing right. The end-to-end customer journey in practice is non-linear, often recursive, and heavily shaped by context that brands have no visibility into. Someone might see your ad, forget about you for three months, get a recommendation from a colleague, spend forty minutes on your website at midnight, abandon their basket, get a retargeting ad, and then convert because a friend mentioned you again. Which touchpoint gets credit? That is a measurement question. The more important question is: what did each of those moments feel like, and what made the person eventually say yes?
I spent a long stretch of my career managing significant paid media budgets across multiple industries, and one thing became clear early: the channel data told you what happened, not why. Two customers might take identical paths through your attribution model and have completely different experiences of your brand. One felt reassured at every step. The other felt mildly annoyed throughout and converted despite you, not because of you. The data looked identical. The commercial implications were not.
Why Journey Mapping Produces So Much Useless Output
Journey mapping is a legitimate and valuable exercise when done well. It is also one of the most reliably misused tools in the marketing toolkit. I have seen agencies charge significant fees to produce beautifully designed journey maps that organisations pin to walls, reference in presentations, and then quietly ignore because the maps bear no resemblance to what customers actually do.
The failure mode is almost always the same: the map is built from the inside out. A team gets into a room, draws on their understanding of the business, and produces a journey that reflects how the organisation thinks customers behave rather than how customers actually behave. It captures the intended experience, not the real one. The intended experience is almost always smoother, more rational, and more flattering to the brand than reality.
Real customers do not follow your intended path. They arrive from unexpected sources. They read your competitor’s website while they have yours open in another tab. They call your sales team and get told something different from what your website says. They buy, feel underwhelmed by onboarding, and quietly downgrade six months later. None of that shows up in a map built from assumptions.
The maps that actually drive commercial decisions are built from a combination of behavioural data, qualitative research, and honest internal audit. They include friction. They include the moments where customers leave. They include the touchpoints that the organisation does not own but that customers experience as part of the brand relationship anyway. And critically, they are built with the humility to acknowledge that the organisation does not know what it does not know.
If you want to understand what your customers actually experience, customer feedback surveys are one of the most direct ways to gather that evidence. Not as a replacement for behavioural data, but as a way to understand the emotional texture of the experience that data alone cannot capture.
The Stages That Actually Matter (and What Goes Wrong at Each One)
There are various frameworks for structuring the customer journey. AIDA has been around for over a century. McKinsey’s consumer decision journey reframed the funnel as a loop. The flywheel replaced the funnel in some quarters. I am not going to advocate for any single framework, because the right structure depends on your category, your customer, and your business model. What I will do is walk through the stages that appear in some form in almost every meaningful journey, and be honest about where things typically go wrong.
Awareness
This is where the relationship begins, and where most marketing investment is concentrated. Awareness is necessary but it is not sufficient, and it is frequently treated as an end in itself rather than the start of something. I have seen brands with high awareness and poor consideration scores, which usually means the awareness is creating an impression that does not survive contact with the actual product or service. Awareness that does not convert to genuine interest is expensive noise.
The other problem with awareness-stage thinking is that it tends to be channel-centric rather than customer-centric. You optimise for reach, for impressions, for share of voice. Those are useful proxies, but they tell you nothing about whether the person who saw your ad formed any meaningful impression of your brand. Awareness without memorability is just spending.
Consideration
This is where customers are actively evaluating options, and it is where a lot of brands lose ground they do not realise they are losing. The consideration stage is not just about your website or your sales materials. It includes review platforms, comparison sites, conversations with peers, and increasingly, AI-assisted research. The way AI tools are reshaping how customers research is changing the consideration stage in ways that most brands have not fully accounted for yet.
What tends to fail at consideration is the gap between what brands say about themselves and what third parties say. If your own marketing claims and your review profile are telling different stories, customers notice. They are not naive. They weight external signals more heavily than brand-owned content, and they should.
Purchase
The transaction itself is a moment that carries disproportionate emotional weight. A difficult checkout process, a confusing pricing page, a last-minute surprise fee, or a customer service interaction that goes badly at the point of sale can undo everything that came before it. I have worked with clients who had genuinely excellent products and strong brand equity but were losing a significant proportion of conversions to avoidable friction in the purchase process. The fix was not more advertising. It was fixing the checkout.
This is also the stage where paid media strategy intersects most directly with the customer journey. How you reach customers at the bottom of the funnel, what you say to them, and what experience they land in when they click, shapes the quality of the relationship from the very first commercial moment. A well-constructed Google Ads customer service strategy is not just about cost-per-click. It is about the impression you create at the moment of highest intent.
Onboarding and Post-Purchase
This is where the gap between marketing and reality becomes most visible, and where most brands underinvest relative to the commercial value at stake. The post-purchase experience is the single biggest driver of whether a customer comes back, recommends you, or quietly disappears. And yet most marketing budgets are structured as if the relationship ends at the point of sale.
I spent time working with a business that had a serious retention problem. Customer acquisition was well-funded and performing. Churn was high and nobody could quite explain it. When we actually talked to customers who had left, the pattern was consistent: the product was fine, the onboarding was confusing, and when they had a problem in the first thirty days, the support experience made them feel like an inconvenience. They had not been let down by the product. They had been let down by the experience around the product. More advertising would not have fixed that. A better onboarding sequence and a more capable help desk operation would have.
Retention and Loyalty
Retention is not a marketing programme. It is the outcome of consistently delivering value and handling problems well. Loyalty schemes can reinforce retention, but they cannot create it from scratch. If the underlying experience is poor, a points programme is a temporary delay, not a solution.
The brands with genuinely strong retention tend to have one thing in common: they treat the ongoing relationship with the same commercial seriousness they give to acquisition. They track how customers feel at multiple points in the relationship, not just at the point of sale. They use tools like Net Promoter Score not as a vanity metric but as an early warning system. And when the score drops, they investigate rather than explain it away.
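The arithmetic behind Net Promoter Score is simple, which is part of why it works as an early warning signal: the share of promoters (9 or 10 on the 0 to 10 "likelihood to recommend" scale) minus the share of detractors (0 to 6). A minimal sketch, using made-up survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count in the denominator but in neither group,
    so the result ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# Hypothetical batch: 5 promoters, 3 passives, 2 detractors.
responses = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(responses))  # 30.0
```

The useful part is not the number itself but the trend: recompute it on each survey wave and investigate when it drops.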
Advocacy
Advocacy is the stage most brands aspire to and few actually engineer. A customer who recommends you without being asked is the most commercially efficient marketing you can have. The problem is that advocacy cannot be manufactured through referral schemes alone. It is earned through consistently exceeding expectations at the moments that matter most to the customer, which are not always the moments that matter most to the brand.
I genuinely believe that if a company delighted customers at every reasonable opportunity, it would not need to work nearly as hard on marketing. Much of what passes for marketing strategy is a blunt instrument used to compensate for experience deficits elsewhere in the business. The companies with the strongest word of mouth tend to be the ones where the customer experience team and the marketing team are working from the same playbook, not operating in separate silos.
The Role of Omnichannel in the Modern Journey
Customers do not think in channels. They think in experiences. They move between your app, your website, your physical store, your social presence, and your customer service team without any awareness of the organisational boundaries that separate those functions internally. They expect continuity. When they do not get it, they notice, even if they cannot articulate exactly what felt wrong.
The omnichannel customer journey is not a new concept, but it remains genuinely difficult to execute well. The challenge is not primarily technological. Most organisations have the tools. The challenge is organisational. Different teams own different channels, different teams own different stages of the journey, and there is rarely a single owner of the end-to-end journey. The result is a journey that feels coherent in the planning deck and fragmented in reality.
I worked with a retail client that had invested heavily in its digital experience while its in-store experience had been allowed to deteriorate. The digital and physical teams barely communicated. Customers who had a great online experience and then visited a store came away confused. The brand felt inconsistent. The fix required organisational change, not a new campaign. Aligning the two experiences took months and involved conversations that had nothing to do with marketing in the traditional sense.
The practical implication is that journey mapping cannot be a marketing exercise alone. It needs input from operations, customer service, product, and wherever else the customer actually encounters the brand. If the map is built entirely by the marketing team, it will reflect the parts of the journey the marketing team controls, which is a subset of the whole.
How to Measure the Journey Without Fooling Yourself
Measurement of the customer journey is one of those areas where the appearance of rigour and actual rigour diverge most dramatically. Attribution models give you a version of the story. Conversion funnels show you where people drop off. Engagement metrics tell you something about attention. None of them, individually or together, give you a complete picture of what your customers are actually experiencing.
The honest position is that journey measurement requires a portfolio of approaches, each with its own limitations, and the skill is in triangulating across them rather than treating any single metric as definitive. Measuring customer satisfaction at multiple points in the journey gives you texture that behavioural data alone cannot provide. Satisfaction at the point of purchase and satisfaction thirty days post-purchase are different signals, and both matter.
There is a broader question of which metrics to prioritise, and the answer depends on what stage of the journey you are trying to understand. Top-of-funnel metrics like reach and brand recall tell you something about awareness. Mid-funnel metrics like consideration rates and time to conversion tell you something about the quality of your proposition. Post-purchase metrics like repeat purchase rate, lifetime value, and churn tell you whether the experience is delivering on the promise. Understanding which customer satisfaction metrics actually deserve your attention is not a trivial question, and the answer changes depending on your business model.
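Post-purchase metrics like these fall straight out of an order log. A rough illustration, with entirely hypothetical customers and order values (the "average revenue per customer" here is only a crude lifetime-value proxy, not a full LTV model):

```python
from collections import Counter

# Hypothetical order log: (customer_id, order_value) pairs.
orders = [
    ("c1", 40.0), ("c2", 25.0), ("c1", 55.0), ("c3", 30.0),
    ("c4", 80.0), ("c2", 20.0), ("c1", 35.0),
]

# Count orders per customer.
counts = Counter(cust for cust, _ in orders)

# Repeat purchase rate: share of customers with more than one order.
repeat_rate = sum(1 for n in counts.values() if n > 1) / len(counts)

# Average revenue per customer: a crude lifetime-value proxy.
avg_revenue = sum(value for _, value in orders) / len(counts)

print(f"repeat purchase rate: {repeat_rate:.0%}")       # 50%
print(f"avg revenue per customer: {avg_revenue:.2f}")   # 71.25
```

Even a toy calculation like this makes the point that post-purchase metrics require joining behaviour over time, which is exactly what most campaign-level reporting does not do.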
One thing I would caution against is over-indexing on the metrics that are easiest to collect. Click-through rates, open rates, and time-on-page are abundant and easy to report. They are also often the least commercially meaningful signals in the stack. The metrics that matter most, genuine satisfaction, intent to return, likelihood to recommend, tend to require more effort to collect and more care to interpret. That effort is worth making.
I have judged the Effie Awards, which means I have spent time evaluating campaigns against actual business outcomes rather than creative merit alone. The entries that stand out are almost always the ones where the brand has a clear, honest account of what changed in customer behaviour and why. The ones that struggle are the ones where the measurement is a post-rationalisation of activity rather than a genuine test of commercial impact. The same discipline applies to journey measurement. Be honest about what your data actually shows.
Technology’s Role in the Journey: What It Can and Cannot Do
Technology has genuinely expanded what is possible in customer experience management. Personalisation at scale, real-time behavioural triggers, predictive churn modelling, automated onboarding sequences. These are real capabilities that were not available a decade ago, and they can make a material difference to the quality of the experience when deployed thoughtfully.
The risk is treating technology as a substitute for strategy. A customer engagement platform is only as good as the understanding of the customer that sits behind it. If you do not know what your customers value at each stage of the journey, automating touchpoints at scale just means sending the wrong messages faster. I have seen this play out more times than I care to count: sophisticated martech stacks producing highly personalised communications that were irrelevant to the customer because the underlying segmentation was built on assumptions rather than evidence.
The sequence matters. Understand the journey first. Identify the moments that have the highest emotional and commercial significance. Then ask what technology can do to improve those moments. Starting with the technology and working backwards to the customer is a reliable way to spend significant budget without improving the experience.
There is also a question of what technology cannot do. It cannot replace genuine human judgment in high-stakes moments. It cannot compensate for a product that does not deliver on its promise. And it cannot manufacture trust. Trust is built through consistent, honest behaviour over time. Technology can support that process, but it cannot shortcut it.
Digital optimisation across the customer journey is a legitimate and valuable discipline. But optimisation without a clear view of what you are optimising for, and for whom, tends to produce incremental improvements to the wrong things. The question to ask before any optimisation programme is: what does a good outcome look like for the customer at this stage, not just for the business metric we are tracking?
The Friction Problem: Why Small Irritations Have Large Consequences
Friction in the customer journey is cumulative. A single small irritation rarely ends a relationship on its own. But friction compounds. A slightly confusing onboarding email, followed by a support response that took longer than expected, followed by a billing query that required three interactions to resolve, adds up to a customer who is quietly reassessing whether the relationship is worth maintaining. They may not be able to tell you exactly why they left. They just stopped feeling good about the brand.
This is why journey audits need to be granular. It is not enough to map the broad stages and declare them satisfactory. You need to look at the micro-moments within each stage, the specific interactions where friction is most likely to accumulate. The password reset that takes too many steps. The FAQ page that does not answer the question the customer actually has. The automated email that arrives at the wrong moment in the relationship. None of these are dramatic failures. Together, they erode the experience.
The brands that manage friction well tend to have a systematic approach to identifying it. They use behavioural data to find where customers abandon processes. They use qualitative research to understand why. They have a clear process for prioritising friction removal based on commercial impact rather than ease of fix. And they treat friction reduction as ongoing operational work, not a one-time project.
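Finding where customers abandon a process is often a matter of looking at step-to-step drop-off rather than end-to-end conversion. A minimal sketch, with invented funnel counts (the step names and numbers here are illustrative, not benchmarks):

```python
# Hypothetical counts of how many customers reached each step.
funnel = [
    ("product_page", 10000),
    ("basket", 3200),
    ("checkout_start", 2900),
    ("payment", 1450),
    ("confirmation", 1380),
]

def drop_off(steps):
    """Share of customers lost at each transition between adjacent steps.

    Large per-step losses flag where friction concentrates, which
    end-to-end conversion alone would hide.
    """
    out = []
    for (name_a, n_a), (name_b, n_b) in zip(steps, steps[1:]):
        out.append((f"{name_a} -> {name_b}", round(1 - n_b / n_a, 3)))
    return out

for transition, rate in drop_off(funnel):
    print(f"{transition}: {rate:.1%} lost")
```

In this made-up example, half the customers who start checkout never reach payment, which is the kind of signal that should trigger the qualitative follow-up described above: the data shows where, the research explains why.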
There is a useful analogy here with product quality. A product that almost works is not a product that works. A customer experience that is mostly smooth but has a few sharp edges is not a smooth experience. The edges are what customers remember, and what they tell other people about. The commercial cost of failing to meet customer expectations is real and measurable, even when it does not show up cleanly in standard reporting.
Building a Journey That Actually Improves Over Time
The difference between organisations that have genuinely good customer journeys and those that merely have good intentions is usually a matter of operating rhythm. The good ones have built a cadence of review and improvement that keeps the experience current as customer expectations, competitive context, and business operations evolve. The ones with good intentions produce a journey map, file it, and return to it two years later when something has gone visibly wrong.
A sustainable approach to journey improvement has a few characteristics. First, it is cross-functional. Customer experience does not belong to marketing alone. It belongs to everyone who touches the customer, which in most organisations is a large number of people. The governance structure needs to reflect that. Someone needs to own the end-to-end view, with the authority to coordinate across functions when the journey requires it.
Second, it is evidence-based. Improvements should be driven by what the data and customer feedback are telling you, not by internal preferences or what is easiest to change. This requires a feedback infrastructure that is genuinely connected to decision-making rather than running in parallel to it. Collecting data about the journey and then not acting on it is one of the more common and more demoralising failures I have seen in marketing organisations.
Third, it is honest about trade-offs. Not every friction point can be eliminated. Some are inherent to the category. Some are the result of regulatory requirements. Some are the cost of operating at scale. The goal is not a frictionless journey, which is an unachievable fantasy, but a journey where the friction that exists is proportionate, explainable, and handled with genuine care for the customer.
Fourth, it is connected to commercial outcomes. Journey improvements need to be evaluated against business metrics, not just customer satisfaction scores. A better onboarding experience should reduce early churn. A faster support response should improve retention rates. If the improvements are not showing up in the commercial numbers, either the improvements are not as significant as they appear, or the measurement is not capturing the right things. Either way, it is worth investigating.
I have spent time working across categories as different as financial services, retail, travel, and B2B technology. The companies with the strongest customer journeys in each of those categories shared a common characteristic: they treated the customer journey as a commercial asset, not a cost centre. They invested in understanding it, improving it, and measuring the return on that investment with the same rigour they applied to their advertising spend. That orientation makes a difference. It changes what gets prioritised, what gets resourced, and what gets fixed.
There is more depth on the tools and disciplines that support this kind of systematic approach across the full Customer Experience Hub, including how to structure feedback programmes, how to think about engagement platforms, and how to connect satisfaction measurement to real business decisions.
The customer journey is not a marketing concept. It is a business concept. The organisations that treat it as such tend to find that their marketing works harder, their customers stay longer, and their growth is more durable than those that treat it as a diagram to be produced and then forgotten. That is not a coincidence. It is what happens when the experience you promise and the experience you deliver are the same thing.
Understanding the journey in full, building the infrastructure to improve it, and having the honesty to measure what is actually happening rather than what you hope is happening, that is the work. It is less glamorous than a brand campaign and harder to attribute than a paid media programme. It is also, in my experience, where the most durable commercial value gets created.
And if you want a practical starting point: talk to your customers. Not in a survey that asks them to rate their satisfaction on a scale of one to ten, but in a conversation that tries to understand what their experience of your brand actually feels like. What you hear will be more useful than most of the data in your analytics platform. It will also be more uncomfortable. That discomfort is the signal. Follow it.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what actually works.
Frequently Asked Questions
What is a customer journey in marketing?
The customer journey is the complete sequence of interactions a person has with a brand, from first becoming aware of it through to purchase, ongoing use, and either continued loyalty or departure. It spans all channels and all touchpoints, including those the brand does not directly control, such as third-party reviews and peer recommendations. It is not a funnel or a linear path. Most customers move through it in ways that are non-linear, recursive, and shaped by context the brand has no visibility into.
What is the difference between a customer journey map and a sales funnel?
A sales funnel is a model of how leads move through a pipeline toward conversion. It is primarily a sales and marketing tool, focused on the organisation’s perspective on the process. A customer journey map is an attempt to represent the journey from the customer’s perspective, across all stages of the relationship and all channels, including what the customer feels at each stage, not just what they do. A funnel is a useful operational tool. A journey map is a strategic tool for understanding and improving the experience. They serve different purposes and should not be treated as interchangeable.
How do you identify where customers are dropping off in the journey?
A combination of approaches works better than any single method. Behavioural analytics, such as web analytics, session recordings, and funnel analysis tools, can show you where customers abandon processes or disengage. Cohort analysis of retention data can show you when customers tend to churn relative to their start date. Customer feedback at specific touchpoints, including post-purchase surveys and exit surveys, can tell you why customers are leaving rather than just where. The most useful picture comes from combining these data sources and looking for consistent patterns rather than treating any single metric as definitive.
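The cohort analysis mentioned above can be sketched in a few lines. This toy example uses invented data: each customer's activity is recorded as month offsets from their own signup date, and the retention curve is the share of the cohort still active N months in:

```python
# Hypothetical cohort: for each customer, the month offsets from
# signup in which they were active (month 0 is the signup month).
customers = {
    "c1": [0, 1, 2],
    "c2": [0, 1],
    "c3": [0],
    "c4": [0, 1, 2, 3],
}

def retention(active_months, horizon=4):
    """Share of the cohort still active at each month offset.

    A sharp step down in the curve shows when, relative to signup,
    customers tend to leave.
    """
    n = len(active_months)
    return [
        round(sum(1 for months in active_months.values() if offset in months) / n, 2)
        for offset in range(horizon)
    ]

print(retention(customers))  # [1.0, 0.75, 0.5, 0.25]
```

Run per signup cohort, curves like this show when churn happens; exit surveys and touchpoint feedback are still needed to explain why.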
What is the most commonly neglected stage of the customer journey?
Post-purchase onboarding is consistently underinvested relative to its commercial importance. Most marketing budgets are heavily weighted toward acquisition, which means the experience after a customer converts receives far less attention and resource than the experience before. Yet the quality of the onboarding and early post-purchase experience is one of the strongest predictors of whether a customer stays, returns, and recommends the brand. Fixing the post-purchase experience is often more commercially valuable than increasing acquisition spend, particularly in categories with high churn.
Can you map the customer journey without primary research?
You can produce a map without primary research, but it will reflect internal assumptions rather than customer reality, and that distinction matters enormously. Journey maps built entirely from internal knowledge tend to show the experience the organisation intends to deliver rather than the experience customers actually have. They tend to be smoother, more rational, and more flattering to the brand than reality. Primary research, whether through customer interviews, usability testing, or structured feedback programmes, is what closes the gap between the intended experience and the actual one. If resources are constrained, even a small number of genuine customer conversations will improve the quality of a journey map significantly.
