Marketing Strategy Research: What Most Teams Skip
Marketing strategy research is the process of gathering and analysing information about your market, customers, competitors, and commercial environment before committing to a strategic direction. Done well, it reduces the risk of building campaigns on assumptions that sound plausible but turn out to be wrong. Done poorly, or skipped entirely, it leaves you optimising execution against a strategy that was never grounded in reality.
Most teams understand this in principle. Fewer do it with the rigour it deserves. And a surprising number confuse research with the outputs they already have: a CRM full of existing customer data, a brand tracker that hasn’t been refreshed in two years, or a competitor analysis that someone built in a slide deck and no one has revisited since.
Key Takeaways
- Most marketing strategy research fails not because teams lack data, but because they only study people who already know them, missing the audiences that would actually drive growth.
- Separating customer understanding from market sizing is a critical discipline. They answer different questions and require different methods.
- Competitor research is routinely over-indexed. Understanding why customers don’t buy from you is more strategically valuable than cataloguing what competitors are doing.
- Research should be connected to a specific decision. Research without a decision attached to it is expensive curiosity, not strategy.
- The goal of research is honest approximation, not false precision. A directionally correct insight acted on quickly outperforms a perfectly measured one that arrives too late.
In This Article
- Why Most Marketing Research Misses the Point
- What Are the Core Types of Marketing Strategy Research?
- How to Connect Research to a Specific Strategic Decision
- The Qualitative vs Quantitative Balance
- Common Research Mistakes That Distort Strategy
- How Research Should Feed Into Strategy Development
- A Note on Research and Business Fundamentals
Why Most Marketing Research Misses the Point
I spent a significant portion of my early career in performance marketing, and I was guilty of this myself. We had access to enormous amounts of data about people who had already purchased, already clicked, already converted. The measurement was clean. The attribution looked tight. And we kept optimising the same pool of intent, believing we were doing sophisticated marketing work.
What I eventually came to understand is that most of what performance marketing captures, it didn’t create. The demand was already there. We were standing at the bottom of the funnel with a very good net, but we weren’t doing anything to fill the river upstream. The research we were doing (customer surveys, conversion analysis, CRM segmentation) was almost entirely focused on people who had already decided they wanted what we were selling. It told us nothing about the much larger population who hadn’t.
That distinction matters enormously when you’re trying to build a marketing strategy, because the research questions that apply to existing customers are completely different from the ones that apply to prospective new audiences. Conflating the two produces strategies that are very good at retaining and converting warm demand, and largely blind to the growth opportunity sitting outside it.
If you’re thinking about how research connects to your broader go-to-market approach, the Go-To-Market and Growth Strategy hub covers the commercial frameworks that sit around these decisions in more depth.
What Are the Core Types of Marketing Strategy Research?
There are four research categories that genuinely matter for strategy. Most teams have some version of each, but they’re often underdeveloped, outdated, or disconnected from the decisions they’re supposed to inform.
Market Research
Market research answers the question: how large is the opportunity, and how is it structured? This includes total addressable market sizing, category growth trends, channel dynamics, and the economic environment your buyers operate in. It’s the research that tells you whether you’re fishing in a growing pond or a shrinking one, and whether the segment you’re targeting is worth the resources you’re planning to put behind it.
This is often the most neglected category, partly because it feels abstract compared to customer data, and partly because it requires going beyond your own analytics. Organisations such as Forrester and BCG have written extensively about how go-to-market strategies fail when they’re built on market assumptions that were never properly tested. The pattern is consistent: teams assume their category is growing because their own revenue is growing, then get caught out when the category contracts and takes them with it.
Customer Research
Customer research answers a different set of questions: who buys from you, why they chose you, what they value, what problems you solve for them, and what would make them leave. This is the category most teams are strongest in, but even here there are systematic gaps.
The most common gap is that customer research only studies customers. That sounds obvious, but the implication is significant. Your customers are a self-selected group. They already found you, already trusted you enough to buy, and already decided you were the right fit. Studying only them tells you what you’re good at with people who chose you. It tells you almost nothing about the people who didn’t.
When I was running an agency that had grown from around 20 people to close to 100, one of the most useful research exercises we did wasn’t a client satisfaction survey. It was a structured set of conversations with companies who had considered us and gone elsewhere. The reasons were sometimes operational, sometimes commercial, sometimes about perception. But they were consistently things we couldn’t have identified from our own client base, because our clients, by definition, hadn’t been put off by them.
Competitor Research
Competitor research is the category most teams over-invest in relative to the strategic value it returns. There’s something psychologically satisfying about building a competitor matrix, tracking their campaigns, monitoring their pricing, and cataloguing their positioning. It feels like intelligence work. It often isn’t.
The problem is that competitor research is almost entirely backward-looking. It tells you what competitors have already done, not what they’re about to do. And it creates a pull toward imitation, because when you spend a lot of time studying what competitors are doing well, the natural response is to do something similar. That’s a race to the middle.
Useful competitor research is narrower and more specific. It focuses on identifying where competitors are weak, what customer needs they’re not meeting, and what market positions are genuinely unclaimed. That’s a different exercise from cataloguing their ad creative and updating a feature comparison table every quarter.
Audience and Demand Research
This is the research category that most directly addresses the growth gap I described earlier. Audience and demand research looks at people who aren’t yet customers: who they are, what problems they’re trying to solve, how they currently think about the category, what language they use, and what it would take for them to consider you.
This type of research is harder to do well because you can’t pull it from your own systems. It requires primary research, search behaviour analysis, social listening, or structured interviews with people outside your existing customer base. SEMrush’s analysis of how demand patterns emerge illustrates how search data can be a useful proxy for understanding what audiences are actively trying to solve, before they’ve made any brand decisions at all.
How to Connect Research to a Specific Strategic Decision
The most important discipline in marketing strategy research isn’t methodological. It’s knowing what decision the research is supposed to inform before you commission it.
I’ve sat in enough briefing meetings to know that research is often commissioned for the wrong reasons. Sometimes it’s to validate a decision that’s already been made. Sometimes it’s to delay a decision that someone doesn’t want to own. Sometimes it’s just because “we should know more about our customers” sounds like a reasonable thing to say, without any particular question attached to it.
Research without a decision attached to it is expensive curiosity. It generates interesting findings that get presented, discussed, filed, and forgotten. The teams that use research well start from the other end: they identify the specific strategic question they’re trying to answer, then work backward to determine what information would actually change their answer, then design the minimum research required to get that information.
A useful framing is to ask: if this research came back with a result we didn’t expect, what would we do differently? If the honest answer is “probably nothing”, the research isn’t connected to a real decision and isn’t worth doing in its current form.
The Qualitative vs Quantitative Balance
There’s a persistent tendency in data-driven marketing environments to distrust qualitative research. It’s seen as anecdotal, subjective, not statistically significant. Quantitative data, by contrast, feels rigorous. You can put it in a chart. You can track it over time. You can defend it in a boardroom.
The problem is that quantitative data tells you what is happening. Qualitative research tells you why. And for strategy, the why is almost always more useful.
I’ve seen this play out repeatedly when reviewing creative strategies. A brand tracker might tell you that consideration has dropped among a particular demographic. That’s the what. But until you’ve actually talked to people in that demographic, you don’t know whether it’s because your messaging isn’t reaching them, because a competitor has become more relevant, because there’s a perception problem you haven’t identified, or because the category itself is declining in that group. Each of those requires a completely different strategic response. Quantitative data alone won’t tell you which one it is.
The most effective research programmes use both. Qualitative research to generate hypotheses and understand the human texture of a problem. Quantitative research to test those hypotheses at scale and understand the size of what you’re dealing with.
Common Research Mistakes That Distort Strategy
After two decades of working with brands across more than 30 industries, the research failures I’ve seen most often come down to a handful of recurring patterns.
Studying the wrong population. As I described earlier, research that only looks at existing customers creates a systematically distorted picture of the market. Growth requires understanding people who haven’t chosen you yet, and that requires going outside your own data.
Treating old research as current. Markets move. Customer expectations shift. A brand positioning study from three years ago may have been excellent work at the time, but if the competitive landscape has changed or the category has evolved, it’s a historical document, not a strategic asset. I’ve seen teams make significant investment decisions based on research that was essentially describing a market that no longer existed.
Confusing satisfaction with advocacy. Customer satisfaction research is valuable, but it has a ceiling. A customer who gives you a seven out of ten is satisfied. They are not enthusiastic. They are not a growth driver. The distinction between satisfied customers and genuinely delighted ones matters commercially, because only the latter group actively brings you new customers. If your research programme is optimised around satisfaction scores, you may be measuring the floor rather than the ceiling.
Over-relying on self-reported behaviour. What people say they do and what they actually do are frequently different. This is well documented in behavioural economics, and it’s a practical problem for anyone running surveys or focus groups. People are aspirational about their own decision-making. They describe a rational, considered process that often bears little resemblance to how they actually chose. Behavioural data, where you can get it, is more reliable than self-reported data for understanding actual purchase behaviour. Platforms like Hotjar, with their feedback and behaviour analysis tools, exist precisely to bridge that gap between what users say and what they do.
Mistaking precision for accuracy. A survey with 2,000 respondents and a 95% confidence interval feels rigorous. But if the sample was drawn from your email list, it’s still only telling you about people who already know you. Statistical precision applied to a biased sample produces confidently wrong answers. This is one of the reasons I’m cautious about the false certainty that large datasets can create. More data is not the same as better data.
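The mechanics of that last failure are easy to demonstrate. The sketch below is a hypothetical simulation (the awareness rate, favourability rates, and population size are all invented for illustration): it surveys 2,000 people drawn only from an "email list" of brand-aware buyers, and shows how a tight 95% margin of error can coexist with an estimate that is wildly wrong about the market as a whole.

```python
import random
import math

random.seed(42)

# Hypothetical market of 100,000 buyers. Only 15% have heard of the brand,
# and awareness correlates strongly with favourability -- which is exactly
# what makes an email-list sample biased.
population = []
for _ in range(100_000):
    aware = random.random() < 0.15
    favourable = random.random() < (0.70 if aware else 0.10)
    population.append((aware, favourable))

true_rate = sum(f for _, f in population) / len(population)

# The survey only reaches people who already know the brand.
email_list = [p for p in population if p[0]]
sample = random.sample(email_list, 2_000)
sample_rate = sum(f for _, f in sample) / len(sample)

# 95% margin of error for a proportion: small, so the result *feels* rigorous.
moe = 1.96 * math.sqrt(sample_rate * (1 - sample_rate) / len(sample))

print(f"True market favourability: {true_rate:.1%}")
print(f"Survey estimate:           {sample_rate:.1%} (±{moe:.1%})")
```

With these assumed numbers, the survey reports favourability around 70% with a margin of error near two points, while the true market figure sits under 20%. The precision is real; the accuracy isn’t, because the sampling frame never included the 85% of the market the brand hasn’t reached.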
How Research Should Feed Into Strategy Development
Research should narrow uncertainty, not eliminate it. That’s an important distinction, because one of the ways research gets misused is as a substitute for judgement. Teams commission more and more research in the hope that at some point the data will make the decision for them. It won’t. At some point, someone has to make a call about what the research means and what to do about it.
The role of research in strategy development is to reduce the number of plausible options, sharpen the questions that remain genuinely open, and give decision-makers a more honest picture of the environment they’re operating in. It’s not to produce a single correct answer. Markets are too complex and too dynamic for that.
One framework I’ve found useful is to structure research outputs around three categories: things we now know with reasonable confidence, things we have a directional view on but would want to validate, and things that remain genuinely uncertain and would require more investigation to resolve. That structure forces honesty about what the research actually established versus what it suggested, and it makes it easier to have a productive conversation about where to invest further versus where to proceed on best current judgement.
The BCG work on go-to-market strategy and commercial transformation makes a related point about how commercial decisions made without adequate market intelligence tend to cluster around the same safe choices as competitors, because everyone is working from the same visible signals. Differentiation in strategy often comes from research that goes deeper or wider than what’s conventionally available.
There’s also a timing dimension that gets underappreciated. Research that arrives after a strategic decision has been made is useful for learning but not for deciding. The most commercially valuable research is done early enough to genuinely influence the direction, not late enough to only confirm it.
For teams building out a broader growth strategy, the Go-To-Market and Growth Strategy hub covers how research connects to positioning, channel strategy, and commercial planning across the full go-to-market cycle.
A Note on Research and Business Fundamentals
One thing I’ve observed across a lot of businesses, including some I’ve been brought in to help turn around, is that marketing research sometimes functions as a way to avoid a harder conversation about the product or the business model.
If customers aren’t buying, or aren’t staying, or aren’t recommending you to others, the research question isn’t always a marketing one. Sometimes it’s a product question. Sometimes it’s a pricing question. Sometimes it’s a service delivery question. Marketing is often asked to paper over those cracks, and research gets commissioned to find a messaging solution to what is actually an experience problem.
The most effective marketing I’ve seen over my career has been built on top of businesses that genuinely deliver something worth talking about. When that foundation is solid, research helps you find the right audiences and the right language to reach them. When it isn’t, research helps you understand the gap between what you’re promising and what you’re delivering, which is useful, but the solution isn’t a better campaign. It’s a better product.
That’s not a comfortable thing to say in a marketing context, but it’s the honest one. Research that surfaces a product or experience problem is doing its job. The question is whether the organisation is willing to act on what it finds, or whether it will commission more research instead.
Vidyard’s research into go-to-market pipeline gaps and CrazyEgg’s analysis of growth approaches both point toward the same underlying truth: the teams that grow most effectively are those who are honest about where their current model is working and where it isn’t, and who use research to sharpen that picture rather than obscure it.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
