Persona History: Where Customer Profiles Go Wrong
Persona history is the record of how customer profiles have been built, revised, and often quietly ignored across the life of a marketing strategy. Understanding that history matters because most personas in circulation today were built once, filed away, and never seriously interrogated again, even as the markets they were meant to describe shifted underneath them.
The problem is not the concept. The problem is what organisations have done with it.
Key Takeaways
- Personas were originally designed as a design tool, not a marketing shortcut, and that origin shapes why so many marketing personas fail in practice.
- Most persona documents are built from assumption, not evidence, and the gap between the two tends to widen over time without anyone noticing.
- A persona that has not been revisited in 12 months is not a living document. It is a historical artefact dressed up as strategy.
- The value of a persona is not in its creation but in the discipline of maintaining it against real customer data.
- Organisations that treat personas as fixed outputs rather than working hypotheses consistently make worse targeting and messaging decisions.
In This Article
- Where Did Personas Actually Come From?
- What Went Wrong in the Agency World
- The Evolution of Persona Methodology
- Why Persona Decay Is a Bigger Problem Than Persona Creation
- The Assumption Problem at the Core of Most Personas
- What Good Persona Practice Actually Looks Like
- Personas in Go-To-Market Planning
- The Persona as a Working Hypothesis, Not a Finished Answer
Where Did Personas Actually Come From?
The formal concept of the marketing persona is most commonly attributed to Alan Cooper, a software designer who developed the idea in the 1980s as a way to keep design decisions grounded in real user behaviour. His original intent was practical and specific: if you are building a product, you need a concrete mental model of who is using it, not an abstract average. He called these constructs personas and used them to argue against designing for a fictional “elastic user” who conveniently adapts to whatever the designer builds.
Cooper’s personas were rooted in field research. He interviewed real people, observed real behaviour, and built profiles that were meant to reflect genuine patterns rather than internal assumptions. The persona was a synthesis of evidence, not a creative exercise.
Marketing adopted the concept, and somewhere in that adoption, the rigour got left behind.
By the early 2000s, personas had become a standard deliverable in brand strategy and digital marketing. Agencies produced them. Consultants charged for them. PowerPoint decks were full of them. But the research discipline that made them useful in Cooper’s hands was often replaced with a workshop afternoon and a round of internal voting on which customer archetype felt most accurate. The persona became a consensus document, not an evidence document.
What Went Wrong in the Agency World
I have been in a lot of rooms where personas were presented as foundational strategy. I have also been in the rooms where those same personas were quietly set aside six months later because the campaigns built around them were not performing. The two conversations rarely connected. Nobody went back and asked whether the persona was wrong. The assumption was always that execution had failed, not the underlying model of the customer.
When I was running iProspect and we were scaling the team from around 20 people to over 100, one of the things that became obvious very quickly was how much inherited thinking sat inside client briefs. Personas were a prime example. A client would hand over a persona document that was three years old, built by a previous agency, and treat it as current intelligence. We would build campaigns against it, and then wonder why the signals coming back from actual audience data told a different story.
The persona said the target customer was a 35-to-44-year-old professional making considered purchase decisions. The actual conversion data showed a much broader age spread, a higher proportion of mobile-first behaviour than the persona anticipated, and a decision cycle that was significantly shorter than the profile suggested. The persona was not wrong about everything. It was just wrong enough to matter.
This is a structural problem with how personas get institutionalised. Once they are written down and approved, they acquire a kind of authority that makes them hard to challenge. Challenging the persona feels like challenging the strategy. So people work around it instead.
If you are building or rebuilding your go-to-market approach, the broader thinking on go-to-market and growth strategy at The Marketing Juice covers how persona work fits into a commercially grounded planning process.
The Evolution of Persona Methodology
Persona methodology has evolved in three reasonably distinct phases, and it is worth understanding each one because the weaknesses of earlier approaches tend to persist in organisations that have not consciously updated their practice.
Phase One: The Qualitative Foundation
The first generation of marketing personas was built almost entirely on qualitative research. Focus groups, interviews, ethnographic observation. The outputs were rich and often genuinely insightful, but they were expensive to produce and slow to update. For a brand with a stable product in a stable market, that was manageable. For anyone operating in a faster-moving environment, the lag between research and reality was a constant problem.
The other limitation of pure qualitative personas is that they reflect what people say, not necessarily what people do. Customers are not reliable narrators of their own behaviour. They rationalise, they present themselves favourably, and they describe the decision process they think they went through rather than the one that actually happened. A persona built on self-reported data is a portrait of the customer’s self-image, not their behaviour.
Phase Two: The Data Layer
The second phase arrived with digital analytics. Suddenly there was behavioural data at scale: what pages people visited, what they clicked, how long they stayed, where they dropped off. Personas started incorporating this layer, and the result was often more accurate in narrow ways while remaining incomplete in broader ones.
Digital behavioural data tells you what happened, not why. It tells you that a segment of users consistently abandons checkout at the payment screen, but it does not tell you whether that is because of price sensitivity, a trust deficit, a technical problem, or simple distraction. The data layer improved persona accuracy on observable behaviour while leaving motivational gaps that qualitative work had at least attempted to fill.
Tools like Hotjar’s feedback and session recording capabilities represent a genuine step forward here because they start to bridge that gap, connecting what users do with some signal of why. But even the best behavioural analytics tool is still a perspective on reality, not reality itself. I have spent enough time with analytics dashboards to know that the same data can support multiple contradictory interpretations, and the one that gets chosen is usually the one that confirms what the team already believed.
Phase Three: The Segmentation Overlap
The third phase, which is roughly where most sophisticated marketing organisations sit now, involves blending qualitative insight, behavioural data, and first-party CRM data into something more dynamic. The persona starts to look less like a fixed portrait and more like a cluster of attributes that can be tested and refined against actual campaign performance.
This is closer to how personas should work, but it creates its own complications. When personas become data-driven clusters, they can lose the human legibility that made them useful as communication tools in the first place. A statistical segment is not the same as a persona. One tells you who to target. The other tells you how to talk to them. Conflating the two produces targeting efficiency without messaging clarity.
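The distinction between a targeting segment and a persona can be made concrete. As an illustrative sketch (all attribute names, filters, and messaging notes here are hypothetical, not drawn from any real dataset), a data-driven segment is a set of observable filters, while a persona layers motivation and messaging guidance on top of it:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A statistical segment: purely observable attributes you can target against."""
    name: str
    filters: dict  # e.g. {"device": "mobile", "age_band": "25-54"}

@dataclass
class Persona:
    """The segment plus the human layer that guides how you talk to them."""
    segment: Segment
    motivations: list = field(default_factory=list)
    messaging_notes: str = ""

# Hypothetical example: a behaviourally defined cluster...
mobile_buyers = Segment(
    name="mobile_checkout_heavy",
    filters={"device": "mobile", "sessions_before_purchase": "<=2"},
)

# ...and the persona built on top of it, carrying the messaging layer.
persona = Persona(
    segment=mobile_buyers,
    motivations=["convenience", "low tolerance for friction"],
    messaging_notes="Lead with speed and simplicity, not feature depth.",
)

# The segment tells you who to target; the persona tells you how to talk to them.
print(persona.segment.filters)
print(persona.messaging_notes)
```

Keeping the two layers explicitly separate, as in this sketch, is one way to blend behavioural clustering with human legibility rather than conflating them.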
Why Persona Decay Is a Bigger Problem Than Persona Creation
Most organisations spend more energy on building personas than on maintaining them. This is the wrong allocation.
A persona built well from strong research will be reasonably accurate at the moment of creation. But markets move. Customer expectations shift. Competitive alternatives emerge. The behaviours and motivations that defined a customer segment in one period may look quite different two years later, particularly in categories where digital behaviour is a significant part of the purchase experience.
I have judged the Effie Awards, which means I have read a lot of cases where brands describe their customer understanding in detail. The ones that stand out are not the ones with the most elaborate persona documents. They are the ones where the brand can demonstrate that their understanding of the customer changed over time, and that their strategy changed with it. That kind of adaptive intelligence is rare, and it is almost never the result of a persona built once and left to stand.
Persona decay happens in several ways. The most obvious is simple neglect: the document gets filed and nobody looks at it. But there is a subtler version that is harder to catch. The persona stays nominally active but becomes a reference document rather than a working tool. Teams cite it in briefs without interrogating it. It provides the language of customer understanding without the substance.
The discipline required to prevent this is not glamorous. It involves scheduled reviews, clear ownership, and a willingness to sit with the discomfort of not knowing whether your current model of the customer is accurate. Most organisations prefer the comfort of certainty, even when that certainty is borrowed from a three-year-old workshop.
The Assumption Problem at the Core of Most Personas
When Dentsu came to me with a pitch for AI-driven personalised creative, the headline numbers were extraordinary: dramatic CPA reductions, significant conversion uplifts. My first question was not about the technology. It was about the baseline. What were they comparing against? It turned out the control creative was genuinely poor. They had not validated the customer understanding that informed the original creative. They had just replaced one set of assumptions with a slightly better-informed set of assumptions and called it a technology success.
The same dynamic plays out in persona work constantly. Organisations build personas from internal assumptions, run campaigns against them, see mediocre results, and conclude that execution was the problem. The possibility that the customer model was wrong from the start does not get seriously examined because it would mean revisiting decisions that already have organisational weight behind them.
This is where the history of persona methodology is actually instructive. Cooper’s original work was built on the premise that assumptions about users are dangerous, that you have to go and observe real behaviour before you can build a useful model. That premise got lost when personas moved from product design into marketing, and the cost of that loss is visible in the quality of customer understanding that underpins most campaigns.
The mechanics of market penetration depend heavily on accurate customer understanding. If you are targeting the wrong profile, or targeting the right profile with the wrong message because your persona misrepresents their motivations, you are not penetrating the market. You are spending money to confirm your own assumptions.
What Good Persona Practice Actually Looks Like
Good persona practice is not about producing a better-looking document. It is about building a disciplined process for testing and updating your model of the customer against real evidence.
That means a few specific things in practice.
First, it means being explicit about what is known versus what is assumed. Most persona documents present everything with equal confidence, which means the reader cannot distinguish between an insight derived from 200 customer interviews and a guess that felt plausible in a workshop. Separating those two things is uncomfortable but necessary. It forces the organisation to identify where the gaps in its customer knowledge actually are.
Second, it means connecting persona attributes to observable signals. If your persona says the target customer is price-sensitive, what does that look like in your conversion data? If it says they are research-led decision makers, what does the content consumption pattern look like before purchase? Personas that cannot be connected to observable signals are not testable, and anything that cannot be tested cannot be improved.
Third, it means building in a review cadence. Not an annual strategy refresh where someone checks whether the persona still feels right. A structured review that brings in fresh data: customer interviews, CRM analysis, campaign signal, competitive intelligence. The frequency depends on how fast the market moves, but for most categories, quarterly is the minimum that makes sense.
The BCG work on scaling agile practices is relevant here because it addresses how organisations build the kind of iterative review discipline that good persona maintenance requires. The challenge is not methodological. It is organisational. Getting teams to treat customer understanding as a hypothesis rather than a settled fact requires a different kind of culture than most marketing functions have built.
Fourth, and perhaps most importantly, it means giving someone clear ownership. Personas without an owner decay. They become shared documents that nobody feels responsible for keeping current. Assigning ownership does not have to mean a dedicated role. It means someone whose job it is to notice when the persona is drifting from reality and to do something about it.
Personas in Go-To-Market Planning
The place where persona quality has the most direct commercial consequence is go-to-market planning. When you are entering a new market or launching a new product, your persona is doing some of the heaviest lifting in the entire strategy. It is informing channel selection, message architecture, pricing assumptions, and sales enablement. If it is wrong, those decisions compound the error.
I have been involved in enough product launches to know that the ones that struggle most are rarely the ones with the weakest product. They are the ones where the team had a confident but inaccurate picture of who they were selling to and why that customer would buy. The product might be genuinely good. The go-to-market falls apart because it is built on a customer model that does not hold up under contact with the actual market.
The BCG analysis of product launch strategy makes the point that customer understanding at launch is not just a marketing input. It is a commercial risk factor. Getting it wrong does not just mean lower conversion rates. It means misallocated investment, delayed market feedback, and a longer path to finding the positioning that actually works.
Creator partnerships in go-to-market contexts are an interesting case here. The reason they work when they work is that a well-chosen creator already has a validated relationship with a specific audience. Their persona knowledge, built through direct interaction with their community, is often more current and more accurate than anything sitting in a brand’s strategy deck. Later’s research on creator-led go-to-market campaigns suggests that brands that lean into creator audience intelligence, rather than treating creators purely as distribution channels, tend to see better results.
The pattern holds because the creator’s audience understanding is continuously refreshed through engagement. It does not decay in the way that an internal persona document does. There is a lesson there about the value of maintaining live connections to actual customers rather than relying on periodic research snapshots.
For a fuller picture of how persona work connects to channel strategy, audience targeting, and growth planning, the go-to-market and growth strategy hub at The Marketing Juice brings these threads together in a commercially grounded way.
The Persona as a Working Hypothesis, Not a Finished Answer
The most useful reframe I have found for persona work is to treat every persona as a hypothesis rather than a conclusion. A hypothesis is something you test. It can be right, partially right, or wrong. It gets updated when evidence contradicts it. It does not acquire institutional authority just because it was written down and approved.
That reframe changes how you build personas, how you use them, and how you respond when campaign data does not match what the persona predicted. Instead of assuming execution failed, you ask whether the model was accurate. That is a harder question to sit with, but it is the right one.
Growth tactics built on inaccurate customer models tend to plateau quickly. The Crazy Egg analysis of growth strategy identifies customer understanding as a foundational input, not a downstream consideration. The sequence matters: you do not build growth strategy and then figure out who the customer is. You start with the customer and build strategy from there. Persona work sits at the beginning of that sequence, which is exactly why its quality, and its currency, determines so much of what follows.
In my first week at Cybercom, I was handed a whiteboard marker in the middle of a Guinness brainstorm when the founder had to leave for another meeting. The internal reaction was somewhere between panic and determination. What I remember most clearly is that the brief was built around an assumed customer: the traditional Guinness drinker, characterised in ways that felt more like brand mythology than actual insight. The most interesting work that came out of that session came from questioning whether that assumed customer was still the right target at all. The persona was doing the work of a settled answer when it should have been doing the work of a provocation.
That is the real lesson from the history of persona methodology. The concept was always meant to sharpen thinking, not replace it. When personas become received wisdom rather than working tools, they stop doing the job they were built for.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
