Brand Market Research: What It Should Tell You and Rarely Does
Brand market research is the practice of gathering structured data about how a brand is perceived, remembered, and chosen relative to competitors. Done well, it tells you whether your marketing is actually working at the level that matters most: what people think and feel when they encounter your brand.
Done poorly, it tells you what your stakeholders want to hear. And in my experience, the latter is far more common than the industry likes to admit.
This article covers what brand research should actually measure, where the methodology typically breaks down, and how to get genuinely useful intelligence rather than expensive validation.
Key Takeaways
- Brand market research is only useful if it measures what drives purchase decisions, not just what makes people feel good about your brand.
- Most brand tracking studies are designed to confirm existing strategy rather than challenge it. That design flaw starts before a single question is written.
- Awareness metrics without salience data give you a false read on brand health. A brand can be well-known and still lose market share.
- Qualitative and quantitative research answer different questions. Using one to do the job of the other is a common and costly mistake.
- The most valuable brand research output is not a score. It is a clear picture of the gap between how your brand is positioned and how it is actually experienced.
In This Article
- Why Most Brand Research Produces Comfortable Data, Not Useful Data
- What Brand Research Should Actually Measure
- Qualitative vs Quantitative: Using Each Method for What It Is Actually Good At
- How to Design Brand Research That Challenges Your Assumptions
- The Tracking Study Trap
- Connecting Brand Research to Commercial Outcomes
- When to Commission Brand Research and When to Wait
- What Good Brand Research Looks Like in Practice
Why Most Brand Research Produces Comfortable Data, Not Useful Data
I have sat in a lot of research debrief meetings. The ones that stick with me are not the ones where the data was good. They are the ones where the data was inconvenient, and the room went quiet.
Brand research has a structural problem that nobody talks about enough: the people commissioning it usually have a vested interest in a particular outcome. The brand team wants to show the campaign worked. The CMO wants to justify the budget. The agency wants to demonstrate its positioning work landed. All of that pressure, subtle as it is, shapes the research design before anyone has written a single question.
The result is surveys that lead respondents toward positive associations, focus groups moderated toward consensus, and tracking studies that measure awareness without measuring the things that actually drive revenue. You end up with a deck full of numbers that look like progress and tell you almost nothing about whether your brand is genuinely stronger in the market.
There is a concept in marketing called narcissistic marketing: the tendency to focus on what a brand wants to say rather than what an audience actually needs to hear. Copyblogger has written clearly about this dynamic in the context of content, but the same failure mode runs through brand research. When you design research around your own assumptions, you are not learning about your audience. You are interviewing yourself.
What Brand Research Should Actually Measure
There are four things brand research needs to measure if it is going to be commercially useful. Most studies cover one or two of them. Very few cover all four with any rigour.
1. Mental Availability
This is the probability that a buyer will think of your brand when they are in a purchase situation. It is not the same as awareness. A brand can have 80% prompted awareness and still fail to come to mind when someone is actually ready to buy. Mental availability is about salience in context, not recognition in a survey.
Good brand research maps the category entry points that matter to your buyers and measures whether your brand is linked to those entry points in memory. That is a very different exercise from asking “have you heard of Brand X?” and recording the yes/no split.
2. Perceived Differentiation
Not whether people think your brand is different in the abstract, but whether they can articulate what makes it different in a way that maps to actual purchase criteria. If your brand is perceived as “professional” and “reliable” and so are your three closest competitors, that is not differentiation. That is category parity.
I have reviewed brand tracking reports where the top-line finding was that the brand scored well on trust and quality. Those are table-stakes attributes in most categories. Scoring well on them tells you that you are not losing on the basics. It does not tell you why someone would choose you over the alternative.
3. Purchase Intent and Its Drivers
Purchase intent as a standalone metric is almost useless. People consistently overstate their likelihood to buy in survey conditions. What matters is understanding which brand attributes are most strongly correlated with intent, and whether your brand owns those attributes in the minds of your target audience.
This is where regression analysis earns its keep. A well-designed brand study will model the relationship between brand perceptions and stated intent, so you can identify which levers actually move the needle rather than optimising for the attributes that score highest in isolation.
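As a sketch of what that driver analysis looks like in mechanical terms, the following uses entirely synthetic data (the attribute names, sample size, and weights are illustrative, not from any real study) to estimate standardised regression coefficients. The attribute with the largest coefficient is the one most strongly associated with stated intent, which is not necessarily the one with the highest average score:

```python
import numpy as np

# Hypothetical survey data: each row is a respondent, columns are
# 1-10 ratings on three brand attributes (names are illustrative).
rng = np.random.default_rng(42)
n = 500
attributes = ["innovative", "good value", "trustworthy"]
X = rng.integers(1, 11, size=(n, 3)).astype(float)

# Simulate stated purchase intent driven mostly by "good value".
# In a real study this column comes from the survey itself.
intent = 0.1 * X[:, 0] + 0.6 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, n)

# Standardise so coefficients are comparable across attributes,
# then fit ordinary least squares to estimate relative importance.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (intent - intent.mean()) / intent.std()
coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)

for name, beta in zip(attributes, coefs[1:]):
    print(f"{name}: {beta:+.2f}")
```

The standardised coefficients, not the raw attribute scores, are what tell you which perception to invest in. A brand could score highest on "trustworthy" while "good value" is doing most of the work on intent.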
4. Competitive Positioning
Your brand does not exist in a vacuum. It exists in a competitive set, and your position in that set is relative, not absolute. Brand research that only measures your own scores without mapping them against the competitive landscape is like running a race and only timing yourself. You might be getting faster while still losing ground.
For more on building the competitive intelligence layer that sits alongside brand research, the Market Research and Competitive Intel hub covers the full range of methods and how they connect.
Qualitative vs Quantitative: Using Each Method for What It Is Actually Good At
One of the most consistent mistakes I have seen in brand research is using quantitative surveys to answer questions that only qualitative methods can address, and vice versa.
Surveys are excellent at measuring the scale and distribution of a perception. They tell you how many people hold a view and how that breaks down across segments. They are poor at explaining why people hold that view, or what the view actually means in practice.
Focus groups and depth interviews are excellent at surfacing the language, associations, and emotional textures that shape brand perception. They are poor at telling you how representative those findings are across a larger population.
The best brand research programmes use both methods in sequence. Qualitative work first, to understand the territory and develop hypotheses. Quantitative work second, to test those hypotheses at scale and measure their distribution. Running them in parallel, or using one without the other, produces research that is either rich but unrepresentative or broad but shallow.
Early in my career, I watched a client commission a large quantitative brand tracker without any prior qualitative work. The survey was built around attributes the marketing team had defined internally. When the results came back, the scores were fine. The problem was that the attributes being measured were not the ones customers actually used to evaluate the category. The research was methodologically sound and strategically useless.
How to Design Brand Research That Challenges Your Assumptions
The design of brand research is where most of the value is either created or destroyed. By the time you are in the field, the important decisions have already been made.
Start with the business question, not the research question. What decision does this research need to inform? If the answer is “we want to understand brand health,” that is not a decision. Push further. Are you deciding whether to reposition? Evaluating whether a campaign shifted perceptions? Assessing whether to enter a new segment? The business question determines the research design. Without it, you are collecting data with no clear destination.
Build in competitive benchmarking from the start. Every key metric should be measured for your brand and your top three to five competitors. This is not optional. Without a competitive baseline, you cannot interpret your own scores in any meaningful way.
Be deliberate about your sample. Nationally representative samples sound rigorous, but they often dilute the signal by including people who are not in your category at all. A sample of category buyers, or better still, category buyers in your target segments, will give you far more actionable data than a general population survey.
Test your questionnaire for leading questions before you go into the field. This sounds obvious. It is surprising how rarely it happens. If a question mentions your brand’s claimed positioning before asking respondents to rate it, you have contaminated the response. A good research partner will catch this. A compliant one will not.
The Tracking Study Trap
Brand tracking studies are one of the most widely used and most consistently misunderstood tools in the research toolkit.
The promise of a tracking study is longitudinal insight: you run the same survey at regular intervals and watch how brand metrics move over time. In theory, this lets you measure the cumulative effect of marketing activity on brand perception.
In practice, tracking studies have several problems that rarely get discussed in the debrief.
First, they are slow. Brand perception changes gradually, and tracking studies typically run quarterly or twice a year. By the time a shift shows up in the data, the marketing activity that caused it is months in the past. Attributing the movement to a specific campaign or initiative is often more art than science.
Second, they calcify. Once a tracking study is established, it becomes politically difficult to change the questions, even when those questions are no longer measuring the right things. I have seen organisations continue running the same tracker for five or six years because changing it would break the trend line, even though the market had moved and the original questions were no longer relevant.
Third, they measure claimed behaviour and attitude, not actual behaviour. There is a persistent gap between what people say they think about a brand and what they actually do when they are in a purchase situation. Tracking studies sit entirely on the claimed side of that gap.
None of this means tracking studies are worthless. They are the best tool available for measuring brand perception at scale over time. But they need to be read alongside actual market share data, customer acquisition data, and where possible, behavioural data from digital channels. A brand tracker in isolation is a perspective on reality. It is not reality itself.
Connecting Brand Research to Commercial Outcomes
The most common failure mode in brand research is treating it as a standalone exercise rather than connecting it to commercial performance. Brand perception matters because it influences behaviour, and behaviour drives revenue. If your research programme cannot draw a line between brand metrics and business outcomes, it will always be vulnerable to budget cuts.
When I was running an agency and managing significant media budgets across multiple clients, the question that came up most often in client reviews was not “what does the brand tracker say?” It was “is this working?” The clients who could connect their brand metrics to pipeline, conversion rates, and customer lifetime value were the ones who protected their brand investment through difficult trading periods. The ones who could only point to awareness scores often saw their budgets redirected to performance channels the moment the business came under pressure.
BCG has written about how business model transformation requires connecting strategic positioning to commercial performance metrics. Their analysis of IT services transformation makes a point that applies equally to brand strategy: the organisations that sustain competitive advantage are the ones that can measure the relationship between positioning and performance, not just track one in isolation from the other.
Practically, this means building a measurement framework that sits alongside your brand tracker. At minimum, it should include market share trends, customer acquisition cost by segment, Net Promoter Score or equivalent loyalty indicators, and where available, share of search as a proxy for brand demand. None of these replace brand research. Together, they give it commercial context.
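Share of search, in particular, is simple enough to compute yourself once you have branded search volumes for your competitive set. A minimal sketch, with hypothetical volumes standing in for figures you would pull from a keyword tool:

```python
# Hypothetical monthly branded-search volumes for a competitive set;
# in practice these come from a keyword research or search-trends tool.
search_volumes = {"Our Brand": 12_000, "Competitor A": 30_000, "Competitor B": 18_000}

# Share of search: each brand's branded-search volume as a
# proportion of total branded search across the competitive set.
total = sum(search_volumes.values())
share_of_search = {brand: vol / total for brand, vol in search_volumes.items()}

for brand, share in sorted(share_of_search.items(), key=lambda kv: -kv[1]):
    print(f"{brand}: {share:.1%}")
```

Tracked monthly alongside market share, the gap between the two series is the interesting part: share of search running ahead of market share suggests demand your distribution or conversion is not yet capturing.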
When to Commission Brand Research and When to Wait
Not every brand question requires a formal research programme. Some questions are better answered with existing data. Some require research but not the expensive kind. Knowing the difference saves budget and prevents the paralysis that comes from commissioning research when what you actually need is a decision.
Commission brand research when you are making a significant strategic decision that depends on understanding current perception: a repositioning, a new market entry, a major campaign investment, a brand architecture review. These are decisions where the cost of being wrong is high enough to justify the investment in getting a clearer picture first.
Do not commission brand research as a substitute for a strategic point of view. I have seen organisations use research as a way of deferring decisions that leadership was not ready to make. The research comes back, it is ambiguous (as most research is), and the decision gets deferred again pending further analysis. Research should inform a decision. It should not be used to avoid making one.
There is also a strong case for using lower-cost research methods before committing to a full brand study. Online surveys, social listening analysis, and review mining can give you a directional read on brand perception relatively quickly and cheaply. If those signals are consistent and clear, you may have enough to act on. If they are contradictory or unclear, that is a signal that the more rigorous research is worth the investment.
The same principle applies to testing creative and messaging. Unbounce’s analysis of A/B testing makes clear that testing works best when you have a clear hypothesis and a meaningful sample size. Brand research follows the same logic. Smaller, sharper studies with a clear question will almost always outperform large, broad studies that try to answer everything at once.
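The sample-size point is worth making concrete, because it determines whether a study can detect the shift you care about at all. The sketch below uses the standard normal-approximation formula for comparing two proportions (for example, prompted awareness across two tracking waves); the z-values are hardcoded for the conventional 5% significance and 80% power:

```python
import math

def sample_size_two_proportions(p1: float, p2: float) -> int:
    """Respondents needed per group to detect a shift from p1 to p2
    in a brand metric (e.g. prompted awareness) between two waves.
    Normal-approximation formula at 5% two-sided significance, 80% power."""
    z_alpha = 1.96   # two-sided 5% significance
    z_beta = 0.84    # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 5-point lift in awareness (40% to 45%) needs roughly
# 1,500 respondents per wave; a 10-point lift needs about a quarter
# of that. Small expected movements get expensive fast.
print(sample_size_two_proportions(0.40, 0.45))
print(sample_size_two_proportions(0.40, 0.50))
```

This is why a tracker with 300 respondents per wave will rarely show a statistically meaningful movement between waves: the shifts brand activity produces are usually smaller than the study can resolve.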
What Good Brand Research Looks Like in Practice
I want to be specific here, because this is where most articles on brand research retreat into abstraction.
A well-designed brand research programme for a mid-sized business typically starts with six to eight depth interviews with current customers, lapsed customers, and non-customers in the target segment. The goal is not statistical significance. It is vocabulary. You want to understand the language people actually use to describe the category and the brands in it, including yours.
Those interviews feed into a quantitative survey of between 400 and 800 respondents, segmented by category involvement and purchase recency. The survey measures aided and unaided brand awareness, category entry point associations, attribute ratings for your brand and three to four competitors, purchase intent and its drivers, and Net Promoter Score or equivalent.
The analysis should produce a perceptual map showing where your brand sits relative to competitors on the dimensions that matter most to buyers. It should identify which attributes are driving intent and whether your brand owns those attributes. It should flag the gaps between how you are positioning and how you are perceived.
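A simplified sketch of how a perceptual map is derived from that survey data: take the mean attribute ratings per brand, centre them, and project onto the top two principal components. All brand names and ratings below are invented for illustration, and real studies often use correspondence analysis rather than plain PCA, but the mechanics are the same:

```python
import numpy as np

# Hypothetical mean attribute ratings (1-10) for four brands; columns
# are the attributes the qualitative phase surfaced as mattering.
brands = ["Our Brand", "Comp A", "Comp B", "Comp C"]
attrs = ["innovative", "good value", "trustworthy", "easy to buy"]
ratings = np.array([
    [7.1, 5.2, 6.8, 6.0],
    [5.0, 7.5, 6.5, 6.2],
    [6.2, 6.1, 7.9, 5.5],
    [4.8, 6.0, 6.1, 7.2],
])

# Centre each attribute, then take the top two principal components
# via SVD: the resulting 2-D coordinates are the map positions.
centred = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
coords = U[:, :2] * S[:2]

for brand, (x, y) in zip(brands, coords):
    print(f"{brand}: ({x:+.2f}, {y:+.2f})")
```

Brands that plot close together are perceived as interchangeable on the measured attributes, which is the "category parity" problem described earlier made visible on one chart.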
The output is not a score. It is a strategic brief. What does the brand need to own in the minds of target buyers that it does not currently own? What is the marketing work that needs to happen to close that gap? What will we measure to know if it is working?
If your research debrief does not answer those questions, the research has not done its job. A high awareness score and a positive sentiment rating are not a strategy. They are a starting point.
Brand research sits within a broader intelligence ecosystem. If you are building out your market research capability more broadly, the Market Research and Competitive Intel hub covers the adjacent methods, from competitor analysis to customer insight, that give brand data its full commercial context.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
