Brand Positioning Research: What the Data Can and Cannot Tell You
Brand positioning research is the process of gathering structured evidence about how your brand is perceived relative to competitors, and using that evidence to sharpen or redefine where you compete. Done well, it reduces the guesswork in positioning decisions and surfaces gaps between what you intend to communicate and what audiences actually take away. Done badly, it produces a thick deck of findings that confirms what leadership already believed and changes nothing.
The difference between those two outcomes is rarely about budget. It is about methodology, intellectual honesty, and the willingness to act on what you find, even when it is uncomfortable.
Key Takeaways
- Brand positioning research is only valuable if the methodology is rigorous enough to distinguish genuine insight from statistical noise.
- Perception gaps, not brand awareness scores, are the most actionable output from positioning research.
- Qualitative and quantitative research serve different purposes: qual generates hypotheses, quant tests them at scale.
- Most positioning research fails not in the data collection phase but in the interpretation phase, where confirmation bias does the most damage.
- Research should inform positioning decisions, not make them. Judgment, commercial context, and competitive reality all matter more than a single data point.
In This Article
- Why Most Brand Positioning Research Produces Weak Outputs
- What Brand Positioning Research Should Actually Measure
- Qualitative vs Quantitative: Getting the Sequence Right
- The Methodology Questions Nobody Asks Loudly Enough
- How to Use Positioning Research Without Being Captured by It
- Tracking Studies: When Continuous Measurement Makes Sense
- The Consistency Problem in Longitudinal Research
- Integrating Research Into Positioning Decisions
If you are working through a broader positioning challenge, the articles in the Brand Positioning and Archetypes hub cover the full strategic picture, from messaging architecture to competitive differentiation. This article focuses specifically on the research layer: what to measure, how to measure it, and how to avoid the traps that make positioning research expensive but useless.
Why Most Brand Positioning Research Produces Weak Outputs
I have reviewed a lot of brand research over the years, both as a client and as someone who has commissioned it. A pattern emerges quickly. The research tends to be thorough on inputs and thin on interpretation. You get a clean presentation with colour-coded charts, a competitive perceptual map, and a set of attribute ratings. What you rarely get is a clear answer to the question that actually matters: what should we change, and why?
Part of this is a structural problem. Research agencies are incentivised to deliver findings, not recommendations. Strategy consultants are incentivised to recommend repositioning whether or not the data supports it. In-house teams are incentivised to present research that validates the current direction. Nobody in the room is purely incentivised to tell you the truth.
When I was building out the strategy function at iProspect, we had a habit of commissioning research and then spending more time debating the methodology than acting on the results. That was not always wasted time. Scrutinising how a question was framed, whether the sample was representative, and whether a three-point difference in attribute ratings was statistically meaningful or just noise saved us from making expensive decisions based on thin evidence. But it also meant we needed to be clear upfront about what we were trying to learn and what decision the research was meant to inform. Without that clarity, research becomes an exercise in generating interesting-looking data rather than resolving a real business question.
What Brand Positioning Research Should Actually Measure
There are four things worth measuring in a positioning research programme. Most brands measure only one or two of them and wonder why the outputs feel incomplete.
1. Unaided and Aided Brand Awareness
Awareness is the foundation. You cannot be considered if you are not known. Unaided recall tells you whether your brand comes to mind spontaneously in a category context. Aided awareness tells you whether people recognise you when prompted. Both matter, and the gap between them is often more instructive than either figure in isolation. A brand with high aided but low unaided awareness has a salience problem, not a recognition problem. Those require different responses.
Awareness data on its own is descriptive, not diagnostic. It tells you where you stand, not why you stand there or what to do about it. Research into local brand loyalty consistently shows that awareness and loyalty do not move in lockstep. A brand can be well known and still fail to convert that recognition into preference. Awareness is a necessary condition for positioning to work, not a sufficient one.
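The aided/unaided gap can be read as a simple diagnostic. Here is a minimal sketch in Python; the thresholds and the example figures are entirely illustrative, since real cutoffs depend on category norms:

```python
def awareness_diagnosis(unaided_pct: float, aided_pct: float) -> str:
    """Classify an awareness profile from unaided and aided recall (%).
    The 40% and 30-point thresholds are illustrative, not category norms."""
    if aided_pct < 40:
        return "recognition problem"    # not known even when prompted
    if aided_pct - unaided_pct > 30:
        return "salience problem"       # recognised, but not spontaneously recalled
    return "awareness broadly healthy"  # look to perception, not awareness

# e.g. 12% unaided, 68% aided: well known, but rarely top of mind
print(awareness_diagnosis(unaided_pct=12, aided_pct=68))  # salience problem
```

The point of encoding it this way is that the two failure modes call for different responses, so the diagnosis should be explicit rather than left implicit in a chart.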
2. Brand Attribute Associations
Attribute research asks respondents to rate brands on a set of dimensions: innovative, trustworthy, good value, premium, customer-focused, and so on. The output is usually a perceptual map that shows where your brand sits relative to competitors on the dimensions that matter most to the category.
The quality of this research depends almost entirely on the quality of the attribute list. If you use generic attributes that apply to every brand in every category, you will get generic results. The attributes need to be grounded in what actually drives purchase decisions in your specific market, which means the qualitative phase needs to happen before the quantitative phase, not as an afterthought.
It is also worth being honest about what perceptual maps can and cannot show. They are a snapshot of current perception, not a map of opportunity. A gap in the perceptual space is not automatically a viable positioning territory. It might be empty because no brand has claimed it, or it might be empty because customers do not value it. Those are very different situations.
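In its simplest form, a perceptual map is just mean attribute ratings centred on the category average, so each brand's coordinates express relative position. A sketch with invented brands, attributes, and ratings:

```python
# All brands, attributes, and mean ratings below are invented for illustration.
ratings = {
    "Brand A": {"premium": 5.9, "good_value": 3.1},
    "Brand B": {"premium": 3.4, "good_value": 5.7},
    "Brand C": {"premium": 4.6, "good_value": 4.5},
}
attrs = ["premium", "good_value"]

# Centre each attribute on the category mean: positioning is relative,
# so the interesting quantity is distance from the pack, not the raw score.
category_mean = {a: sum(r[a] for r in ratings.values()) / len(ratings) for a in attrs}
positions = {
    brand: {a: round(r[a] - category_mean[a], 2) for a in attrs}
    for brand, r in ratings.items()
}
for brand, pos in positions.items():
    print(brand, pos)
```

A brand sitting near the origin on every axis is undifferentiated even if its absolute scores look respectable, which is exactly the trap a raw attribute table hides.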
3. Perception Gaps
This is where positioning research gets genuinely useful. A perception gap is the distance between how your brand intends to be perceived and how it is actually perceived. Measuring this requires you to first articulate your intended positioning clearly, which is harder than it sounds for most organisations.
I have sat in workshops where senior leadership teams cannot agree on what their brand stands for. Not because they are confused, but because different functions have quietly evolved different versions of the positioning over time. Marketing says one thing, sales says another, the product team has its own narrative. By the time you run the customer research, you are not measuring the gap between positioning and perception. You are measuring the gap between several competing internal narratives and perception. The research becomes a proxy for an internal alignment problem that should have been resolved before the fieldwork started.
Perception gap data is the most actionable output from a positioning research programme. It tells you specifically where communication is failing, where product reality is not matching brand promise, and where there is genuine alignment that you can build on.
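Once the intended positioning has been articulated as target scores on the same attribute scale the survey uses, the gap itself is simple arithmetic. A sketch with hypothetical attributes and scores on a 7-point scale:

```python
# Hypothetical attributes and scores on a 7-point scale. The intended
# scores come from the positioning brief; the perceived scores are
# survey means.
intended  = {"innovative": 6.5, "trustworthy": 6.0, "premium": 5.5}
perceived = {"innovative": 4.8, "trustworthy": 5.9, "premium": 5.6}

gaps = {a: round(perceived[a] - intended[a], 1) for a in intended}
# Largest negative gap first: that is where communication (or product)
# is failing to land the intended position.
for attr, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{attr:12s} {gap:+.1f}")
```

The hard work is upstream, in agreeing the `intended` scores; the calculation only becomes meaningful once the internal alignment problem described above has been resolved.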
4. Competitive Positioning
Positioning is inherently relative. You are not positioned in the abstract. You are positioned in relation to every other option your customer is considering. This means the research needs to include competitors, and it needs to include the right competitors, which is not always the same as your internal list of who you compete against.
Customers often define the competitive set differently from brands. A premium gym might see itself competing with other premium gyms. Its members might see it competing with running apps, personal trainers, and the option of doing nothing. If your research only benchmarks you against the brands you have identified as competitors, you are missing the full picture of the decision environment your customers are actually navigating.
BCG’s analysis of recommended brands found that the brands most likely to be recommended tended to occupy a clear and differentiated position in the minds of their customers, not just a positive one. That distinction matters for how you design the competitive component of your research.
Qualitative vs Quantitative: Getting the Sequence Right
Qualitative research and quantitative research answer different questions. Qualitative research, typically depth interviews or focus groups, is exploratory. It helps you understand how people think about a category, what language they use, what their decision-making process looks like, and what associations they hold about brands. It generates hypotheses. It does not validate them.
Quantitative research tests those hypotheses at scale. It tells you how prevalent an attitude or association is, whether differences between segments are meaningful, and whether your brand’s position has shifted over time. It validates. It does not explore.
The most common mistake I see is running them in the wrong order, or running only one. Quantitative-only research on brand positioning tends to produce data that is statistically reliable but interpretively shallow. You know that 43% of respondents associate your brand with “reliability”, but you do not know what reliability means to them, whether it is a hygiene factor or a differentiator, or whether it translates into purchase intent. Qualitative-only research produces rich insight, but you cannot know whether the views you heard in eight interviews represent 8% or 80% of your market.
The sequence matters: qual first to surface the right questions and the right language, then quant to test at scale. Reversing that sequence, or skipping qual entirely because it feels too soft, is a false economy. You end up measuring the wrong things with great precision.
The Methodology Questions Nobody Asks Loudly Enough
I developed a habit early in my career of asking uncomfortable questions about research methodology before accepting findings at face value. Not to be difficult, but because I had seen too many decisions made on the basis of research that looked authoritative and turned out to be flawed.
The questions worth asking consistently are these. Who was sampled, and does that sample actually represent your target audience? Online panels are convenient and relatively cheap, but they skew toward people who regularly complete surveys, which is not the same as your customer base. A B2B positioning study that samples procurement managers from a general business panel is not the same as one that samples procurement managers from your actual prospect universe.
Are differences between groups statistically significant? A brand scoring 6.2 on a trust attribute versus a competitor scoring 6.5 on a seven-point scale, with a sample of 200 respondents per group, may not be a meaningful difference at all. The number looks precise. That precision is misleading if the confidence interval swallows the gap.
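You can sanity-check a gap like that with a back-of-envelope confidence interval. In the sketch below, the standard deviation is an assumption (around 1.8 is plausible for a 7-point scale, and it is rarely reported in summary decks, which is part of the problem):

```python
import math

def mean_diff_ci(m1: float, m2: float, sd: float, n: int, z: float = 1.96):
    """Approximate 95% CI for the difference of two independent group
    means, assuming equal SDs and group sizes (normal approximation)."""
    se = math.sqrt(2 * sd ** 2 / n)
    diff = m2 - m1
    return diff - z * se, diff + z * se

# The 6.2 vs 6.5 gap from the text, n = 200 per group, assumed SD = 1.8.
low, high = mean_diff_ci(6.2, 6.5, sd=1.8, n=200)
print(f"95% CI for the 0.3-point gap: ({low:.2f}, {high:.2f})")
# The interval spans zero: the apparent gap is consistent with noise.
```

Under this assumed SD the interval comfortably contains zero, so treating the 0.3-point gap as a finding would be over-reading the data. Asking the agency for the actual SDs and intervals is cheaper than acting on a phantom difference.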
How were questions framed? Leading questions in brand research are more common than they should be. If you ask respondents whether they agree that your brand is “committed to customer service” on a scale of one to seven, you will get different results than if you ask them to rate your brand on a set of attributes without priming them first. The framing of the question shapes the answer.
And finally: is the research designed to surface uncomfortable truths, or to confirm a direction that has already been decided? I have seen research briefs written in a way that made it structurally difficult for the findings to challenge the existing strategy. That is not research. That is expensive validation theatre.
How to Use Positioning Research Without Being Captured by It
Research informs positioning. It does not determine it. This distinction matters more than it might seem.
There is a category of positioning decision that research cannot make for you. Whether to defend your current position or shift it. Whether to pursue a segment that is currently underserved or double down on your core. Whether a perception gap is best closed by changing communication or changing product. These are judgment calls that require commercial context, competitive intelligence, and a clear view of where the business needs to go. Research can reduce uncertainty around those decisions. It cannot eliminate it, and it should not be treated as a substitute for strategic thinking.
BCG’s research on customer experience and brand strategy makes the point that brand perception is shaped by a combination of direct experience, word of mouth, and communication. Research can measure the outputs of that system. It cannot, on its own, tell you which lever to pull.
The brands that use positioning research well tend to do three things consistently. They go into the research with a specific decision they need to make, rather than a general desire to understand their brand better. They treat the findings as one input among several, alongside competitive analysis, customer behaviour data, and commercial performance. And they act on what they find, even when the findings are inconvenient.
That last point is harder than it sounds. I have seen research findings presented to leadership teams where the data clearly indicated a positioning problem, and the response was to commission more research. Sometimes that is the right call. More often it is avoidance dressed up as rigour.
Tracking Studies: When Continuous Measurement Makes Sense
A one-time positioning study gives you a snapshot. A tracking study gives you a trend line. The distinction matters when you are actively investing in repositioning or when your competitive environment is shifting quickly enough that a single-point measurement becomes stale before you can act on it.
Tracking studies measure the same set of brand health metrics (awareness, attribute associations, consideration, preference) at regular intervals, typically quarterly or biannually. Over time, they tell you whether your positioning investments are moving perception in the intended direction, and at what rate.
The practical challenge is cost. A well-designed tracking study with a meaningful sample size, run quarterly across a competitive set of five or six brands, is a significant investment. For large brands with substantial marketing budgets, that investment is proportionate. For smaller brands or those in the early stages of a repositioning, the money might be better spent on a single strong study and a clear action plan, with a follow-up study twelve to eighteen months later to measure movement.
Brand equity tracking is also worth approaching with some scepticism about what the metrics actually represent. Moz’s analysis of brand equity measurement highlights the gap between brand equity scores and commercial outcomes. A brand can show strong equity metrics and still be losing market share, because equity measures perception, not behaviour. The two are related but not identical.
If you are running a tracking study, make sure at least some of the metrics you track connect to commercial outcomes, not just perceptual ones. Consideration and purchase intent are closer to behaviour than awareness and attribute ratings. They are not perfect proxies, but they are more actionable.
The Consistency Problem in Longitudinal Research
One underappreciated challenge in brand positioning research is maintaining methodological consistency over time. If you change your questionnaire, your sample source, your fieldwork timing, or your analytical approach between waves of a tracking study, you cannot reliably attribute changes in the data to changes in brand perception. You might just be measuring the effect of the methodology change.
This sounds obvious, but it happens more often than it should. Teams change. Agency relationships change. Budgets get cut and corners get trimmed. Someone decides to add a new competitor to the tracking set without thinking through how that changes the frame of reference for respondents. Someone else decides to move from phone interviewing to online because it is cheaper, without considering how mode effects might influence the results.
Consistency in tracking methodology is not exciting. It is also not optional if you want the data to mean anything over time. Document your methodology in enough detail that it can be replicated exactly, and treat any proposed changes as a significant decision that requires careful thought about the implications for data continuity.
For teams thinking about how brand consistency connects to broader communication strategy, HubSpot’s analysis of consistent brand voice is a useful complement to the measurement side of this question. Consistency in research methodology and consistency in brand expression are different disciplines, but they are both in service of the same goal: knowing whether what you are doing is working.
Integrating Research Into Positioning Decisions
The final step in any positioning research programme is translation: taking what the data shows and connecting it to a clear positioning brief that can guide creative development, messaging architecture, and channel strategy.
This translation step is where most research programmes lose momentum. The findings get presented, the deck gets filed, and six months later the organisation is still operating on the old positioning because nobody translated the research into a concrete brief. The research was not wrong. It was just never connected to action.
A positioning brief informed by research should answer four questions. What do we currently own in the minds of our target audience? What do we want to own, and why is that territory both credible and differentiated? What is the gap between current and intended perception, and what is driving it? And what needs to change, in communication, in product, or in both, to close that gap?
Those four questions are not complicated. Answering them honestly, with the research as evidence rather than decoration, is where the real work happens. The brands that do this consistently are the ones that build genuine brand awareness over time, rather than just generating data about it.
If you are working through positioning decisions at a more fundamental level, the Brand Positioning and Archetypes hub covers the strategic frameworks that sit beneath the research layer, including how to define a positioning territory, how to stress-test differentiation claims, and how to build a messaging architecture that holds up across channels and audiences.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
