Strategic Market Research: Stop Researching and Start Deciding
Strategic market research is the process of gathering and interpreting information about customers, competitors, and market conditions to inform decisions that move a business forward. It is not a reporting exercise. It is not a box to tick before a campaign goes live. Done properly, it changes what you build, what you say, and who you target.
Most organisations do some version of it. Far fewer do it well. The gap is not usually budget or access to data. It is the discipline to connect research outputs to actual decisions, and the willingness to act on what the research tells you even when it is inconvenient.
Key Takeaways
- Strategic market research only has value when it is connected to a specific decision. Research without a decision owner is decoration.
- The most common failure is commissioning research to validate a position already taken, rather than to genuinely inform one.
- Primary and secondary research serve different purposes. Mixing them up wastes time and produces answers to questions nobody asked.
- Segmentation research is only useful if the segments are actionable. Demographic clusters with no behavioural insight cannot drive targeting decisions.
- The organisations that use research well treat it as a continuous input, not a periodic project. Markets do not pause between your research cycles.
In This Article
- Why Most Market Research Does Not Change Anything
- What Makes Research Strategic Rather Than Descriptive
- The Right Mix of Primary and Secondary Research
- Segmentation Research That Actually Drives Decisions
- How to Brief a Market Research Project Properly
- Turning Research Into Strategy: The Gap Most Teams Cannot Cross
- Making Research a Continuous Function, Not a Periodic Project
Why Most Market Research Does Not Change Anything
I have sat in more research readouts than I can count. Thick decks, well-designed slides, careful analysis. And then nothing changes. The campaign brief that was already written gets approved. The positioning that was already decided gets confirmed. The research was commissioned to provide cover, not direction.
This is the structural problem with how most organisations approach market research. It is treated as a project with a deliverable rather than a function with a purpose. Someone commissions a piece of work, a supplier delivers a report, the report gets presented, and then it sits in a shared drive until someone needs a statistic for a pitch deck.
The organisations that get genuine value from research do something different. They start with the decision, not the brief. Before commissioning a single survey or focus group, they ask: what are we going to do differently depending on what we find? If the answer is “nothing, we just want to understand the market better,” that is not strategic research. That is curiosity with a budget attached.
When I was running agencies, I saw this pattern repeatedly on the client side. A brand would spend six figures on a segmentation study, produce beautifully named customer personas, and then brief their media agency to target 25-to-54-year-olds anyway. The research existed. It just never touched the actual work.
If you want to explore the broader landscape of research methods and competitive intelligence tools, the Market Research and Competitive Intel hub covers the full range, from search intelligence to behavioural analytics to audience segmentation approaches.
What Makes Research Strategic Rather Than Descriptive
Descriptive research tells you what is happening. Strategic research tells you what to do about it. Both have value, but they are not the same thing, and conflating them is where most research programmes go wrong.
A brand awareness tracker tells you that 42% of your target audience recognises your brand. That is descriptive. Strategic research would tell you which segments have low awareness, what is driving that gap, whether closing it is worth the investment, and what message is most likely to shift it. The first number is interesting. The second set of answers is actionable.
The distinction matters because descriptive research is much easier to commission and much easier to present. It produces clean numbers that look authoritative. Strategic research tends to produce messier outputs: competing hypotheses, conditional findings, recommendations that require judgment to apply. It is harder to package and harder to sell internally, which is partly why organisations default to the descriptive version.
Strategic research also requires a clearer brief. You cannot commission strategic research without knowing what decisions it needs to inform. That sounds obvious, but it rules out a significant proportion of the research that gets commissioned every year. If your brief to a research agency is “we want to understand the market,” you will get a market overview. If your brief is “we need to decide whether to enter the SME segment in Q3 and what our pricing position should be,” you will get something you can actually use.
The Right Mix of Primary and Secondary Research
Primary research is data you collect yourself: surveys, interviews, focus groups, ethnographic observation, usability testing. Secondary research is data that already exists: published reports, industry data, competitor filings, platform analytics, academic literature.
Both are legitimate. Both have blind spots. The mistake is treating one as a substitute for the other.
Secondary research is faster and cheaper, but it answers the questions someone else thought to ask. It reflects the market as it was when the data was collected, which may or may not be the market you are operating in now. Industry reports from research houses like Forrester, whose work on marketing organisational models is a useful reference point for structural questions, are valuable for context and benchmarking. They are less useful for the specific, granular questions that drive individual business decisions.
Primary research gives you answers to your specific questions, but it is only as good as the questions you ask and the sample you ask them of. A survey of 200 existing customers will tell you a great deal about people who already buy from you. It will tell you very little about people who considered you and chose a competitor, or people who have never heard of you at all.
The most useful research programmes combine both. Secondary research to frame the market, identify the right questions, and avoid reinventing wheels that already exist. Primary research to get specific, current, decision-relevant answers to the questions secondary data cannot answer.
One practical approach I have used is to run a secondary research sprint before commissioning any primary work. Two weeks of structured desk research, covering market sizing, competitor positioning, category trends, and any existing customer data the business already holds. That sprint usually surfaces three or four genuinely open questions that primary research can answer, rather than commissioning a large study to answer questions that are already answered elsewhere.
Segmentation Research That Actually Drives Decisions
Segmentation is one of the most commissioned and least used outputs in marketing research. Every brand of a certain size has a segmentation model. Most of them are gathering dust.
The reason is usually that the segments were built for analytical elegance rather than operational usefulness. A segmentation study that produces six beautifully differentiated customer archetypes is impressive research. But if your media buying platform cannot target those archetypes, if your sales team cannot identify which segment a prospect belongs to, and if your product team cannot map features to segment needs, then the segmentation has no operational value regardless of its analytical quality.
BCG’s work on demand-centric growth makes a related point about the gap between how brands think about customers and how customers actually make decisions. The implication for segmentation research is that segments built around brand perception or demographic profiles often miss the behavioural and attitudinal drivers that actually explain purchase decisions.
Useful segmentation research has four characteristics. The segments are meaningfully different from each other in ways that affect how you would approach them. They are identifiable in the real world, meaning you can actually find and target them through the channels you use. They are large enough to be worth addressing separately. And they are stable enough that the segmentation remains valid for long enough to act on.
When I was involved in a major retail client’s planning process, we commissioned a segmentation that met all four criteria. The output was less elegant than the previous study, with three segments rather than six, but each one mapped directly to a distinct media strategy, a distinct product range, and a distinct pricing approach. The research drove decisions. That is the test.
How to Brief a Market Research Project Properly
A bad brief produces bad research. This is not a criticism of research agencies. It is a structural problem. Research suppliers can only answer the questions they are asked. If the brief is vague, the research will be broad. If the brief conflates multiple objectives, the research will try to serve all of them and serve none of them well.
A well-constructed research brief has six components:
1. Business context: what is the organisation trying to achieve, and what decisions are being made in the next three to six months?
2. Research objectives: what specific questions need to be answered?
3. Decision owners: who will act on the findings, and what authority do they have?
4. Known constraints: what do you already know, and what hypotheses are you testing?
5. Methodology preferences or requirements: are there reasons to prefer qualitative over quantitative, or vice versa?
6. Timeline and budget, stated honestly.
The component most briefs miss entirely is the decision owner. Research without a named decision owner is research without a purpose. Someone needs to be accountable for taking the findings and doing something with them. Without that accountability, research becomes an end in itself rather than a means to an end.
I have seen this play out in both directions. At one agency, we received a brief from a client that named the CMO as the decision owner for a positioning study. The CMO attended the readout, asked sharp questions, and within three weeks had approved a new brand architecture based directly on the findings. At another client, the brief named no decision owner, the research was presented to a cross-functional group with no clear authority, and the findings were debated for months without resolution. Same quality of research. Completely different outcomes.
Turning Research Into Strategy: The Gap Most Teams Cannot Cross
The translation from research insight to strategic recommendation is where most programmes break down. It requires a different skill set from the one that produces good research. Analysis is not strategy. A finding is not a recommendation. A recommendation is not a plan.
The organisations that close this gap tend to do two things differently. First, they involve the people who will act on the research in the research process itself, not just the readout. When a strategist or a product manager has been involved in designing the research questions and observing fieldwork, they arrive at the readout with context that makes the findings more actionable. Second, they build a structured synthesis step into the research process, separate from the analysis. Analysis answers “what did we find?” Synthesis answers “what does this mean for us?” These are different questions and they require different thinking.
Tools like collaborative experimentation platforms can help teams move from insight to action faster, particularly when research findings need to be tested against live customer behaviour before being embedded in strategy. The principle is the same whether you are running a controlled experiment or applying qualitative research: the insight needs to be tested against reality before it is treated as settled.
Behavioural data can also serve as a useful reality check on what research participants say they do versus what they actually do. Conversion surveys, like those available through Hotjar’s conversion survey tools, can surface the gap between stated motivation and revealed preference in ways that traditional research methods often miss. I have seen focus groups produce confident consensus around a message that performed poorly in market. The participants were not lying. They were rationalising. Behavioural data does not have that problem.
The broader point is that strategic market research is not a single method. It is a discipline that draws on multiple sources, tests hypotheses against evidence, and connects findings to decisions. The methods are tools. The discipline is the thing.
Making Research a Continuous Function, Not a Periodic Project
The project model of market research has a fundamental flaw: markets do not pause between your research cycles. By the time a large-scale research programme is commissioned, fielded, analysed, and presented, the market conditions it was designed to understand may have shifted. This is not an argument against large-scale research. It is an argument for building continuous intelligence alongside it.
Continuous research does not mean running surveys every week. It means building a set of lightweight, always-on inputs that give you a current read on the questions that matter most. Customer satisfaction data, search trend analysis, social listening, sales team feedback, and periodic qualitative interviews with customers and non-customers can all serve this function at relatively low cost.
BCG’s research on retail reinvention identified the ability to sense and respond to market shifts quickly as one of the distinguishing characteristics of category leaders. That capability is not built through periodic research projects. It is built through continuous intelligence that is embedded in how the business makes decisions, not bolted on as an occasional input.
The practical implication is that most organisations should probably allocate research budgets differently. Less on large, infrequent studies commissioned to answer questions that have already been partially answered by existing data. More on building the infrastructure for continuous intelligence: the tools, the processes, and the people who can synthesise ongoing inputs into decision-relevant insight.
When I grew an agency from 20 to just under 100 people, one of the things I invested in early was a structured approach to market intelligence. Not a research department, but a rhythm: monthly reviews of competitor activity, quarterly customer interviews, and a standing agenda item in leadership meetings for market signals. It was not expensive. It meant we were rarely surprised by things we should have seen coming.
For more on how research methods, tools, and competitive intelligence fit together as a coherent programme, the Market Research and Competitive Intel hub covers the full range of approaches, from primary research design to the tools that support ongoing market monitoring.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
