Brand Research Is Not Strategy. Here Is What Separates Them.

Brand research and strategy are not the same thing, and confusing them is one of the most expensive mistakes a marketing team can make. Research tells you what exists: what people think, what competitors are doing, what the category looks like right now. Strategy tells you what to do about it. The gap between those two things is where most brand work quietly falls apart.

The discipline of brand research and strategy is really two disciplines that need to be held in tension. Research without strategic interpretation is just data with a slide deck attached. Strategy without research is opinion dressed up as direction. Getting both right, and understanding how one feeds the other, is what separates brand work that changes commercial outcomes from brand work that fills a folder on someone’s desktop.

Key Takeaways

  • Brand research generates inputs. Strategy generates decisions. Treating them as the same process produces neither.
  • The most common research failure is not bad data. It is settling on a methodology before defining the question it is meant to answer.
  • Competitive research is almost always too shallow. Most teams look at messaging and miss the commercial logic underneath it.
  • A brand strategy is only finished when someone can make a creative or channel decision using it without asking for clarification.
  • Research quality depends on whether the methodology matches the question, not on whether the sample size sounds impressive.

I have been in rooms where an agency presents forty slides of research findings and the client nods along, genuinely believing they now have a brand strategy. They do not. They have a situation analysis. What happens next, the choices about where to compete, what to stand for, and how to behave differently from everyone else, that is the work that actually matters. And it has not started yet.

Why Research and Strategy Get Conflated

Part of the problem is structural. Most agencies and consultancies present research and strategy as a single flowing process, which makes it look like one thing. You do the research, you build the strategy, you present both together. The seam between them is invisible in the deliverable, so clients rarely question whether the strategy actually follows from the research or whether it was written independently and the research was used to justify it after the fact.

I have seen both happen more times than I would like to admit. Sometimes the research genuinely shapes the strategy. Sometimes the strategy was formed in week one based on the team’s instincts, and the research phase was used to find supporting evidence for a direction that was already chosen. The output looks the same. The quality of the strategic thinking is very different.

The other reason they get conflated is that research feels like progress. It produces things: transcripts, data files, charts, a presentation. Strategy produces a document that, if written well, looks deceptively simple. A positioning statement is one paragraph. A tone of voice framework might be two pages. After twelve weeks of research, clients sometimes feel short-changed by how slight the strategy looks on paper. So agencies pad it out, which makes the strategy harder to use and easier to ignore.

For a broader view of how brand strategy fits together as a discipline, the articles on brand positioning and archetypes at The Marketing Juice cover the full range of decisions involved, from competitive mapping to value proposition construction.

What Good Brand Research Actually Looks Like

Good brand research starts with a question, not a methodology. Most briefs I have seen start the other way around: “We want to do focus groups and a survey.” That is a methodology looking for a purpose. Before you decide how to gather information, you need to be precise about what you are trying to learn and why you cannot already answer it.

The questions that actually matter in brand research tend to cluster around three areas. First, perception: what do people currently think about this brand, and how does that differ across audiences and contexts? Second, category: what are the unwritten rules of this space, what do people expect, and where are those expectations being underserved? Third, competitive: what are other brands doing, what is working, and what space is genuinely available?

Each of those questions requires a different research approach. Perception work often needs qualitative depth, conversations that allow people to explain their mental models rather than just rate attributes on a scale. Category work benefits from ethnographic observation or diary studies, understanding how people actually behave rather than how they say they behave. Competitive work is largely desk-based, but it needs to go deeper than reading competitors’ websites and noting their taglines.

I spent years being skeptical of survey data, not because surveys are inherently unreliable but because the ones I kept seeing were designed to confirm rather than challenge. The questions were leading, the sample sizes were too small to be meaningful, and the differences between response options were treated as statistically significant when they were not. The methodology has to match the question. If it does not, the data is worse than useless because it creates false confidence.

When I am evaluating research, the first thing I look at is not the findings, it is the methodology. How were participants recruited? What was the question design? Were the differences between groups large enough to act on, or are we looking at noise that got turned into a narrative? BCG’s work on customer experience and brand strategy is a useful reference point here: the gap between what brands believe about their customer experience and what customers actually report is consistently larger than most marketing teams expect. That gap only becomes visible if the research is designed to find it.
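The "noise turned into a narrative" problem above can be checked with basic arithmetic. As an illustrative sketch (the numbers below are hypothetical, not from any study cited in this article), a two-proportion z-test shows whether the gap between two groups' survey responses is larger than sampling variation alone would produce:

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the gap between two group proportions.
    Returns (observed difference, p-value). A large p-value means the
    gap is within normal sampling variation, not a finding."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical example: 62.5% vs 55% "trust the brand"
# across two groups of 80 respondents each
diff, p = two_proportion_z(50, 80, 44, 80)
```

With 80 respondents per group, that seven-point gap produces a p-value above 0.3, which is exactly the kind of difference that gets turned into a slide headline when it is really just noise.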

The Competitive Research Problem

Competitive research is where brand work tends to be laziest. Most teams look at what competitors say: their taglines, their website copy, their social presence, their campaign themes. That is useful as far as it goes, but it does not tell you why they made those choices or whether those choices are working.

The more useful question is: what is the commercial logic underneath the positioning? A brand that leads on sustainability messaging is not just making a values statement. It is making a bet that sustainability is a meaningful purchase driver in its category, that it can credibly own the claim, and that the claim will hold up under scrutiny. Understanding that logic tells you whether the positioning is a genuine strategic choice or a bandwagon move. Those two things look identical on the surface and are very different in terms of durability.

When I was running an agency and we were pitching in new categories, I would always push the team to go beyond the obvious competitive set. If you are a challenger brand in a category, your real competition is not just the other brands in the category. It is the mental shortcut people use when they are not paying attention, which is usually the market leader or the default option. Understanding how that shortcut works, and what would have to be true for someone to override it, is more strategically useful than knowing what your direct competitors said in their last campaign.

BCG’s research on the world’s strongest brands points to something that competitive analysis often misses: the brands that sustain value over time are not necessarily the ones with the most distinctive positioning at any given moment. They are the ones that maintain consistency while adapting execution. That is a strategic insight that pure message analysis would not surface.

How Research Should Feed Strategy

The translation from research to strategy is not automatic. It requires judgment, and judgment is where most processes break down. The research produces findings. Someone has to decide which findings are strategically significant and which are interesting but irrelevant. That decision is not made by the data. It is made by a person with a point of view about what the business is trying to achieve.

This is why brand strategy has to start with a business problem, not a brand brief. If you do not know what commercial outcome the strategy needs to drive, you cannot make good judgment calls about which research findings matter. Everything looks equally relevant, and the strategy ends up trying to respond to too many things at once.

The process I have seen work consistently is this: define the business problem first, then design the research to answer the questions that are relevant to that problem, then use the findings to make choices rather than to describe the landscape. The strategy is the choices. It is not the description.

One of the most common failure modes I saw when judging the Effie Awards was strategies that described the audience rather than making a choice about them. “Our audience is busy professionals aged 25-45 who value quality and authenticity.” That is not a strategic insight. That is a demographic sketch. The strategic version would be: “Our audience has been let down by brands in this category that promise quality and deliver inconsistency, and they have defaulted to price as the only reliable signal. We are going to give them a different signal.” That is a choice. It implies a direction. It rules things out.

The Role of Qualitative Research in Brand Work

Qualitative research is underused in brand strategy, partly because it is harder to present. You cannot put a percentage next to a theme from a focus group. You cannot say “67% of participants feel the brand is trustworthy.” What you can say is: “When we asked people to describe the brand, the language they used was consistently about reliability in low-stakes situations, but no one mentioned it in the context of decisions that really mattered to them.” That is a more useful insight, but it takes more work to turn into a slide.

Qualitative research is also better at surfacing what people do not say. The hesitation before an answer. The moment when someone describes a competitor with more warmth than they used for the brand being studied. The way people reach for metaphors when they are trying to explain something they have not consciously articulated before. None of that shows up in a survey. All of it can be strategically significant.

The limitation of qualitative research is that it cannot tell you how widespread something is. A theme that emerged strongly in eight interviews might reflect a genuine pattern in the broader audience or it might be a quirk of who was recruited. That is why qualitative and quantitative research are most useful when they are designed to work together: qualitative to generate hypotheses and surface nuance, quantitative to test whether those hypotheses hold at scale.
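When moving from eight interviews to a quantitative test, the practical question is how many respondents the survey needs before the result is worth acting on. A minimal sketch, using the standard margin-of-error formula for a proportion (the figures here are illustrative assumptions, not from the article):

```python
from math import ceil

def sample_size_for_margin(margin, confidence_z=1.96, p=0.5):
    """Smallest sample needed so a proportion estimate is accurate
    to within +/- `margin` at the given confidence level
    (z = 1.96 corresponds to roughly 95% confidence).
    p = 0.5 is the worst case, giving the most conservative size."""
    return ceil((confidence_z ** 2) * p * (1 - p) / margin ** 2)

# To confirm a theme from qualitative work to within +/- 5 points
# at 95% confidence, you need roughly 385 respondents:
n = sample_size_for_margin(0.05)
```

The point is less the exact number than the order of magnitude: a quantitative follow-up that recruits 60 people has not actually tested the hypothesis the qualitative work generated.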

Brand awareness measurement is a useful bridge between the two. Semrush’s overview of brand awareness measurement covers the practical mechanics of tracking how brand perception shifts over time, which is valuable context when you are designing research that needs to establish a baseline before strategy is implemented.

When the Research Contradicts the Strategy You Expected

This is the moment that separates good strategic process from bad. If the research comes back and tells you something you did not expect, what happens next? In a healthy process, the unexpected finding gets examined carefully. Is it methodologically sound? Is it consistent across different parts of the research? Does it change the strategic direction, or does it change the execution within a direction that still holds?

In an unhealthy process, the unexpected finding gets explained away. “That was probably an outlier in the sample.” “The participants in that group were not representative.” “The client had already committed to this direction before we started.” I have been in both types of rooms. The second type is more common than anyone in the industry would publicly admit.

The most useful research I have ever seen in a brand context was research that made the client uncomfortable. Not because discomfort is inherently valuable, but because it meant the research had found something real that the business had not been willing to look at directly. One client I worked with was convinced their brand was seen as innovative. The research showed they were seen as reliable and slightly boring, which in their category was actually a stronger commercial position than innovative. The strategy we built leaned into reliability rather than fighting the perception. It worked. But it required the client to let go of a story they had been telling themselves for years.

Brand loyalty is another area where research regularly challenges assumptions. MarketingProfs data on brand loyalty illustrates how quickly loyalty patterns shift under economic pressure, which has direct implications for how you interpret brand perception research conducted at a single point in time.

Making the Strategy Testable

A brand strategy that cannot be tested is not a strategy, it is a belief system. One of the things I push for in every strategy process is defining, before the strategy goes into execution, what would tell you it is working and what would tell you it is not.

This sounds obvious. It almost never happens. Most brand strategies are presented and approved without any agreement on what success looks like beyond vague references to awareness and sentiment. When the campaign runs and the results are mixed, there is no shared framework for interpreting what happened. Everyone picks the metrics that support their preferred narrative.

The metrics that matter will depend on the business problem the strategy was built to solve. If the problem is that the brand is not being considered in the purchase decision, then consideration metrics matter more than awareness metrics. If the problem is that people consider the brand but do not choose it, then conversion and preference data are more relevant. Sprout Social’s brand awareness tools offer one lens on tracking visibility, but visibility is only meaningful if it is connected to the commercial outcome you are trying to move.

The other thing worth building into the strategy upfront is a view on what could undermine it. Brand strategies fail for predictable reasons: inconsistent execution, internal resistance, competitive response, or a shift in the category that makes the positioning less relevant. If you can name those risks at the start, you can design the strategy to be more resilient to them. That is not pessimism. That is how strategy is supposed to work.

The Consistency Problem in Execution

Research and strategy are only as valuable as the execution they produce. A brand strategy that is well-researched and clearly written but inconsistently applied across touchpoints is not working. The consistency problem is more common than the strategy problem in most organisations.

HubSpot’s analysis of brand voice consistency covers the practical mechanics of maintaining coherence across channels and teams, which becomes increasingly difficult as organisations grow and more people are involved in producing brand communications. The risk of brand dilution through inconsistent application is real, and it is something that Moz has examined specifically in the context of AI-generated content and brand equity, where the volume of content being produced can outpace the governance structures designed to keep it on-brand.

When I was growing an agency from a small team to close to a hundred people across multiple nationalities, the consistency challenge was constant. The brand of the agency, what we stood for and how we presented ourselves, had to be coherent across a team that was adding new people every few months. The answer was not more rules. It was clearer principles and better examples of what good looked like in practice. Rules get worked around. Principles, if they are genuinely understood, get applied.

MarketingProfs on building flexible, durable brand identity toolkits makes a similar point about visual coherence: the goal is not rigidity but a system that holds together under the pressure of real-world use. That applies equally to verbal identity and strategic positioning.

The full body of thinking on how brand strategy connects to positioning, architecture, and competitive differentiation is covered across the brand strategy section of The Marketing Juice, which is worth working through if you are building or rebuilding a brand from the ground up.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between brand research and brand strategy?
Brand research generates information about perceptions, audiences, and competitors. Brand strategy is the set of choices made using that information: where to compete, what to stand for, and how to behave differently from alternatives. Research without strategy is a situation analysis. Strategy without research is opinion. The two need to be held in sequence, with clear translation between them.
How do you know if brand research is good enough to build strategy on?
Start with the methodology, not the findings. Ask whether the research was designed to confirm a hypothesis or to challenge it. Check whether the sample was recruited in a way that reflects the actual audience. Assess whether the differences between groups are large enough to be meaningful or whether they are within normal variation. Good research makes uncomfortable findings visible rather than smoothing them out.
How much research is needed before writing a brand strategy?
Enough to answer the questions that the business problem requires, and no more. Research scope should be determined by the decisions that need to be made, not by what is standard practice or what fills a timeline. Over-researching delays strategy and creates a false sense of certainty. The goal is sufficient insight to make defensible choices, not exhaustive coverage of everything knowable about the category.
What should happen when research contradicts the expected strategic direction?
The finding should be examined carefully before being accepted or dismissed. Is it methodologically sound? Is it consistent across multiple parts of the research? Does it change the strategic direction or just the execution within it? If it holds up under scrutiny, the strategy should follow the evidence rather than the prior expectation. Strategies built on wishful thinking about brand perception fail in execution because the audience does not share the premise.
How do you measure whether a brand strategy is working?
By agreeing on the metrics before the strategy goes into execution, not after. The relevant metrics depend on the business problem the strategy was built to solve. If the problem is low consideration, track consideration. If the problem is low conversion from consideration, track preference and conversion. Awareness metrics are a proxy, not an outcome. The strategy should specify what would count as evidence that it is working and what would count as evidence that it needs to change.
