Brand Positioning Research: What the Data Can and Cannot Tell You
Brand positioning research is the process of gathering structured evidence about how your brand is perceived relative to competitors, and using that evidence to inform positioning decisions. Done well, it reduces the guesswork in strategy. Done poorly, it creates a false sense of certainty that is worse than having no data at all.
The distinction matters because most positioning research is not bad in its execution. It is bad in its interpretation. The numbers are real. The conclusions drawn from them often are not.
Key Takeaways
- Brand positioning research reduces strategic guesswork, but only if the methodology is sound and the interpretation is honest.
- Perception gaps between how a brand sees itself and how audiences experience it are the most commercially useful finding in any positioning study.
- Qualitative research surfaces the language and logic customers use. Quantitative research tells you how many people think that way. You need both.
- Most positioning research fails not in data collection but in the boardroom, where inconvenient findings get softened or ignored.
- Research should inform positioning decisions, not make them. Judgment still matters more than any dataset.
In This Article
- What Is Brand Positioning Research Actually Measuring?
- Why Qualitative Research Comes First
- How to Structure Quantitative Positioning Research
- The Perception Gap: Where Positioning Research Gets Commercially Useful
- Competitor Benchmarking: Useful and Easily Abused
- Segmentation: Not All Perceptions Are Equal
- Tracking Studies: When to Run Them and When Not To
- The Research Report Is Not the Strategy
- What Good Brand Positioning Research Actually Produces
I have sat in enough strategy presentations to know the pattern. An agency presents research findings. The client nods along until something challenges their assumptions. Then the conversation shifts from “what does this tell us” to “I’m not sure our customers really meant that.” The research does not fail. The process around it does.
What Is Brand Positioning Research Actually Measuring?
Before running any research, it helps to be precise about what you are trying to measure. Brand positioning research typically covers three things: awareness, association, and differentiation. These are related but not the same, and conflating them produces muddled briefs and muddled findings.
Awareness tells you whether people know you exist. Association tells you what they connect to your name when they do. Differentiation tells you whether those associations are distinct enough to create preference. You can have high awareness with weak association, which is common in commoditised categories. You can have strong association that is not differentiated, which is common in markets where everyone claims the same territory.
When I was growing the agency, we did a positioning exercise that revealed something uncomfortable. We had strong awareness in our core market, reasonable association with performance and delivery, but almost no differentiated positioning versus the three other agencies our prospects were also considering. Everyone said roughly the same things about us. That finding was more useful than any amount of positive brand sentiment data, because it told us exactly where to focus.
If you want a broader view of how positioning connects to the full strategy picture, the work on brand positioning and archetypes at The Marketing Juice covers the strategic framework that research should feed into.
Why Qualitative Research Comes First
There is a persistent temptation to go straight to a survey. Surveys feel rigorous. They produce numbers. Numbers feel like evidence. But if you run a survey before you understand the language your customers use, you end up measuring your own assumptions, not their reality.
Qualitative research, whether that is depth interviews, focus groups, or ethnographic observation, does something surveys cannot. It surfaces the mental models, the vocabulary, and the logic that customers use when they think about your category. That vocabulary matters enormously. When you write survey questions using your own language rather than theirs, you introduce bias before a single respondent has clicked submit.
I have seen this play out in B2B contexts particularly. A client in professional services ran a positioning survey using language from their own website. The findings looked clean. Then we ran a round of depth interviews and discovered that the core value proposition they were measuring, “end-to-end integrated solutions,” meant nothing to their buyers. The buyers thought in terms of risk reduction and speed to implementation. The survey had been measuring a concept that did not exist in the customer’s head.
Qualitative research is not the enemy of quantification. It is the prerequisite for it. Run six to ten depth interviews with customers and prospects before you design a single survey question. You will write better questions, and you will measure things that actually matter to the people you are trying to understand.
How to Structure Quantitative Positioning Research
Once you have the qualitative foundation, quantitative research lets you understand scale. How many people hold a particular perception? How does that vary by segment, by geography, by purchase stage? These are questions that interviews cannot answer reliably.
The standard approach is a brand perception survey run across a representative sample of your target audience, including both customers and non-customers. The core measures typically include unaided and aided awareness, attribute association, competitive comparison, and Net Promoter or preference metrics.
A few things I insist on when reviewing research methodology. First, sample size needs to be sufficient for the analysis you want to run. If you plan to cut the data by segment, each segment needs to be large enough to be statistically meaningful. A total sample of 200 respondents that you then split into four segments gives you 50 per segment, which is not enough to draw reliable conclusions from most attribute questions.
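To make the segment-cut arithmetic concrete, here is a minimal sketch of the 95% margin of error for a survey proportion, using the standard normal approximation and the worst case p = 0.5. The `margin_of_error` helper is my own illustrative name, and the 200 and 50 respondent counts simply mirror the example above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n respondents.

    p=0.5 is the worst case (widest interval); z=1.96 is the two-sided
    95% critical value of the standard normal distribution.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Whole sample of 200 versus a 50-respondent segment cut
print(f"n=200: +/- {margin_of_error(200):.1%}")  # about +/- 7 points
print(f"n=50:  +/- {margin_of_error(50):.1%}")   # about +/- 14 points
```

Doubling the uncertainty from roughly ±7 to roughly ±14 points is why a 50-respondent segment cannot reliably separate attribute scores that sit within a dozen points of each other.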
Second, the difference between two numbers needs to be tested, not assumed. If 43% of respondents associate your brand with “trusted expertise” and 39% associate a competitor with the same attribute, that is not a meaningful lead. It is noise. I have judged enough Effie submissions to know how often statistically insignificant differences get presented as strategic wins. Do not let that happen in your research.
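As a rough sanity check on differences like the 43% versus 39% example, a pooled two-proportion z-test can be sketched as follows. The sample sizes of 200 per brand are illustrative assumptions, not figures from any real study:

```python
import math

def two_proportion_z(p1, p2, n1, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 43% vs 39% on two samples of 200 respondents each (illustrative numbers)
z = two_proportion_z(0.43, 0.39, 200, 200)
print(f"z = {z:.2f}")  # 0.81, well below the 1.96 needed at the 95% level
```

At these sample sizes the four-point gap sits comfortably inside sampling noise, which is exactly the kind of check that stops an insignificant difference being presented as a strategic win.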
Third, be careful with agreement scales. A five-point Likert scale where 60% of respondents agree or strongly agree that your brand is “innovative” sounds positive until you check the same figure for your competitors and find it is 58%. The absolute number is less informative than the relative position.
Wistia has written thoughtfully about the problem with focusing exclusively on brand awareness metrics, which is worth reading if your research brief is currently built entirely around awareness measurement.
The Perception Gap: Where Positioning Research Gets Commercially Useful
The most valuable output from brand positioning research is not a ranking of how well-known you are. It is the gap between how you believe you are positioned and how your audience actually experiences you.
That gap has commercial consequences. If you believe you are positioned as a premium, specialist provider but your customers describe you in terms that suggest you are a generalist with competitive pricing, your sales team is having the wrong conversation. Your marketing is investing in a position that does not yet exist in the market. Your pricing strategy may be misaligned with perceived value.
To find this gap, you need to run the same attribute questions internally before you field them externally. Ask your leadership team, your sales team, and your marketing team to rate the brand on the same dimensions you will ask customers to rate. The divergence is instructive. In my experience, internal teams consistently overestimate distinctiveness and underestimate how much work still needs to be done to own a position in the market.
BCG’s work on what drives brand recommendation is a useful reference point here, particularly the finding that the brands most likely to be recommended are not always the most well-known, but the ones with the clearest and most consistent identity in their category.
Competitor Benchmarking: Useful and Easily Abused
Positioning research almost always includes a competitive dimension, and that is right. Brand positioning is inherently relative. You do not occupy a position in isolation. You occupy one relative to the alternatives your customers are considering.
The practical question is which competitors to include. The instinct is to benchmark against the names you know. But the brands you compete with in a pitch are not always the brands that occupy the same mental space as you in a customer’s head. Ask customers early in the qualitative phase who they would consider alongside you. The answers sometimes surprise people.
When I was running the agency, we assumed our competitive set was three or four other mid-sized independent agencies. The qualitative research told a different story. Several clients were also considering taking work in-house or using freelance networks. That was not a competitor in the traditional sense, but it was competing for the same budget and the same decision. Ignoring it in the research would have given us an incomplete picture of the market we were actually operating in.
Moz has done interesting work on how brand equity operates in competitive contexts, including a detailed look at how brand equity can both protect and constrain a brand depending on how tightly associated it becomes with a single idea. The same principle applies to how you interpret competitive positioning data.
Segmentation: Not All Perceptions Are Equal
Aggregate brand perception data can be misleading. A brand that scores well overall may be scoring very differently across the segments that actually matter commercially. If your highest-value customers perceive you in a way that is fundamentally different from your average customer, the aggregate number obscures the strategic reality.
This is particularly relevant in B2B, where the person who uses your product, the person who evaluates it, and the person who signs off the budget may all have different perceptions of your brand. A positioning study that only surveys one of those three groups is giving you a partial picture.
Segment your research by at least three dimensions: customer versus prospect, high-value versus standard account, and where possible, decision-maker versus end-user. The differences between these groups often contain more strategic insight than the headline numbers.
Consumer brand loyalty research has shown repeatedly that loyalty patterns shift significantly by segment, particularly under economic pressure. The MarketingProfs analysis of how brand loyalty changes during recession is a useful reminder that aggregate loyalty scores can mask very different behaviour in different customer groups.
Tracking Studies: When to Run Them and When Not To
A single wave of brand research gives you a snapshot. A tracking study run at regular intervals gives you movement. Both have their place, but tracking studies are often oversold as a necessity when a single strong study would serve the business better.
Tracking makes sense when you are running significant brand investment and need to measure whether it is shifting perception. It also makes sense in categories with high competitive activity, where the landscape is changing quickly enough that a twelve-month-old study is already outdated.
It does not make sense when the budget for tracking would be better spent on a single deeper study, or when the business is not running enough brand activity to expect meaningful movement between waves. I have seen tracking studies run quarterly for brands that were doing almost no brand-level marketing. The data showed almost no change, wave after wave, because there was nothing driving change. The research budget was being spent to confirm inertia.
If you are going to track, decide in advance what movement would be meaningful and what would constitute noise. A two-point shift in brand association is not a strategic signal. A sustained eight-point shift over three waves probably is. Set those thresholds before the data comes in, not after, when the temptation to interpret in your favour is strongest.
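The pre-agreed threshold idea can be sketched as a simple rule applied to wave-on-wave scores. The `sustained_shift` helper, the eight-point threshold, and the wave values below are all hypothetical illustrations, not a standard tracking methodology:

```python
def sustained_shift(waves, threshold=0.08, min_waves=3):
    """True if the most recent min_waves readings all sit at least
    `threshold` above the first-wave baseline."""
    baseline = waves[0]
    recent = waves[-min_waves:]
    return len(waves) > min_waves and all(w - baseline >= threshold for w in recent)

# Baseline association score of 31%, then quarterly waves
print(sustained_shift([0.31, 0.33, 0.40, 0.41, 0.40]))  # True: 8+ point lift held for 3 waves
print(sustained_shift([0.31, 0.33, 0.32, 0.34, 0.33]))  # False: movement within noise
```

The value of writing the rule down in advance, even informally, is that the threshold cannot quietly move once the data arrives.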
HubSpot’s overview of consistent brand voice is relevant here, because one of the most common reasons a tracking study shows no movement is that the brand’s communication is not consistent enough to build cumulative perception over time.
The Research Report Is Not the Strategy
This is where most positioning research projects lose their value. The data gets collected, the agency or research firm produces a report, and the report sits in a shared drive while the business continues largely as before. The research becomes a box-ticking exercise rather than a strategic input.
The reason this happens is usually that the research was not connected to a decision from the start. If you cannot name the specific strategic or commercial decisions the research is designed to inform, you are not ready to commission it. Research that is not tied to a decision is an expensive way to feel thorough.
Before briefing any research, write down three to five decisions that the findings will directly influence. Positioning statement, yes or no. Message hierarchy for the next twelve months. Which segments to prioritise in brand investment. Whether to extend into a new category or double down on the current one. These are the kinds of decisions that research should be built around. If you cannot write that list, the brief is not ready.
The other failure mode is selective reading. Findings that confirm existing strategy get amplified. Findings that challenge it get qualified, contextualised, or quietly dropped. I have been in rooms where a research debrief went from uncomfortable to comfortable in real time as the framing shifted. The data did not change. The interpretation did. That is not strategy. That is expensive confirmation bias.
Brand loyalty research from Moz on local brand loyalty illustrates how even granular, well-designed research requires honest interpretation to be commercially useful, particularly when the findings challenge assumptions about what is driving customer retention.
Positioning research is one part of a broader strategic discipline. The full picture of how brand strategy connects to commercial performance, audience insight, and long-term brand building is covered across the brand strategy hub at The Marketing Juice, which is worth working through if you are building or refreshing a positioning approach from the ground up.
What Good Brand Positioning Research Actually Produces
When it works, brand positioning research produces four things that are genuinely useful. First, a clear picture of where you sit in the competitive landscape from the customer’s perspective, not yours. Second, the language customers use to describe the category and your role in it, which feeds directly into messaging and copy. Third, the gaps between your current perception and the position you want to own, which defines the work to be done. Fourth, a baseline against which future brand investment can be measured.
None of those outputs tell you what your positioning should be. That is a judgment call that requires commercial understanding, competitive insight, and a view on where the market is heading. Research informs that judgment. It does not replace it.
The brands that use positioning research well treat it as a recurring input to strategy, not a one-time deliverable. They run it with enough rigour to trust the findings, enough honesty to act on the uncomfortable ones, and enough commercial sense to connect it to decisions that actually affect the business.
That combination is rarer than it should be. But when it exists, the research pays for itself many times over, not because it tells you something you could not have guessed, but because it gives you the confidence to act on what you already suspected and the evidence to bring others with you.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
