Market Research Only Works If It Changes Something

Market research earns its place in strategic planning when it changes a decision. Not when it validates one that has already been made, and not when it fills a slide deck with numbers that get nodded at and forgotten. The role of market research in strategy is to reduce the cost of being wrong, and that only happens when the findings are connected to something consequential.

Most organisations collect more research than they act on. The gap between insight and action is where strategy quietly falls apart.

Key Takeaways

  • Research that does not change a decision was not worth commissioning. The question to ask before any study is: what would we do differently if the findings surprised us?
  • Strategic planning without research is informed guesswork. Research without a strategic question is expensive filing.
  • The most common research failure is not methodological. It is organisational: findings presented too late, to the wrong people, in the wrong format to influence anything.
  • Secondary research sets the context. Primary research tests whether that context applies to your specific market, customer, or moment.
  • Research quality is not measured by sample size or methodology. It is measured by whether the strategy it informed performed better than it would have without it.

What Does Market Research Actually Do for Strategy?

Strategic planning is fundamentally a process of making choices under uncertainty. Which market to enter. Which segment to prioritise. Which proposition to lead with. Which price point to defend. Research does not eliminate that uncertainty, but it changes the quality of the judgment call. It replaces assumption with evidence, and supposition with something you can interrogate.

I have sat in enough planning sessions to know what happens when research is absent. The loudest voice wins. The most senior person’s instinct becomes the strategy. The team spends three months building something the market did not want, and then another three months wondering why it did not land. This is not a hypothetical. I watched it happen at agencies, at clients, and at organisations that should have known better.

Research does four things for strategy when it is used properly. It identifies where genuine opportunity exists, not just where the business wishes it existed. It surfaces the assumptions that are most likely to be wrong. It gives leadership a shared factual basis for disagreement, which is far more productive than competing opinions. And it creates a baseline against which performance can actually be measured.

None of those things happen automatically. They require research to be commissioned with a specific strategic question in mind, conducted rigorously, and then presented to the people who control the decisions, at the point when those decisions are still open.

If you are building out your product marketing capability more broadly, the Product Marketing hub at The Marketing Juice covers the full strategic landscape, from positioning and pricing through to launch planning and competitive intelligence.

Why Research Gets Disconnected from Strategy

The disconnect is usually structural rather than intentional. Research teams sit in one part of the business. Strategy teams sit in another. The research gets commissioned, conducted, and delivered as a report. The strategy gets built in parallel, by people who may have glanced at the executive summary but did not attend the debrief. By the time the findings are formally shared, the strategic direction is already set.

I have seen this pattern repeat across industries. At one agency where I was running the planning function, a client commissioned a substantial piece of customer segmentation research. It took four months. By the time the findings landed, the campaign brief had already been written, the creative had been developed, and the media plan was in market. The research sat in a shared drive and was referenced in exactly one presentation, as a retrospective justification for decisions that had already been made.

The research was not bad. The timing was. And timing in strategic planning is almost everything.

There is also a subtler problem: research commissioned to confirm rather than to challenge. When a leadership team has already decided on a direction and commissions research to support it, the framing of the questions, the selection of the methodology, and the interpretation of the findings all bend towards validation. This is not always conscious, but it is common. And it produces the worst possible outcome: confident bad strategy.

The Strategic Questions That Research Should Answer

Good research starts with a precise question, not a broad topic. “Tell us about our customers” is a topic. “Which customer segments are most likely to increase spend in the next 12 months, and what would need to change about our proposition to make that happen?” is a question. The difference in usefulness between those two starting points is enormous.

In strategic planning, the questions that research can most usefully answer tend to cluster around five areas.

Market sizing and structure: how large is the addressable opportunity, how is it segmented, and where is it growing or contracting? This is foundational for resource allocation and prioritisation. Without it, you are guessing at where to invest.

Customer understanding: who buys, why they buy, what drives the decision, and what would make them switch. This is the territory that buyer persona research attempts to map, though the quality of execution varies enormously. A persona built from demographic data is a demographic. A persona built from behavioural and attitudinal research is a tool you can actually use.

Competitive landscape: where competitors are strong, where they are exposed, and where there is genuine white space. Competitive analysis done properly goes well beyond tracking share of voice or monitoring ad spend. It maps strategic intent and identifies the positions that are contested versus the ones that are available.

Proposition and positioning: whether the intended positioning is credible, differentiated, and relevant to the segments that matter. This is where research connects most directly to product marketing, because the findings should shape what you say, to whom, and through which channels.

Performance and tracking: whether the strategy is working, where it is underperforming, and what is driving the variance. This is the area most organisations handle reasonably well, because it is quantitative and familiar. The problem is that performance tracking without the other four types of research cannot tell you why something is happening, only that it is.

How Research Should Feed Into the Planning Cycle

Strategic planning cycles have natural moments where research input is most valuable. The challenge is to map the research programme to those moments rather than treating research as a standalone activity that happens whenever budget is available.

Before annual planning, the most valuable research is landscape-level. What has changed in the market over the past year? Where have competitors moved? What do customers now want that they did not want before? This is predominantly secondary research, supplemented by qualitative primary work where the secondary data is incomplete or ambiguous.

During proposition development, the most valuable research is customer-facing and specific. Concept testing, proposition ranking, message hierarchy testing. This is where qualitative methods earn their keep, because you are trying to understand not just what people prefer but why they prefer it and what trade-offs they are making.

Before launch, the most valuable research is validation-oriented. Does the proposition land as intended? Is the pricing credible? Are there objections that have not been addressed? The product marketing discipline has developed strong frameworks for pre-launch validation, and the organisations that use them consistently outperform those that launch on instinct.

Post-launch, the research shifts to tracking and diagnosis. Not just “is it working?” but “where is it working and where is it not, and what does that tell us about the underlying assumptions we made in planning?”

The organisations that do this well treat research as a continuous input to strategy rather than a periodic event. They have standing research programmes that generate a regular flow of market intelligence, supplemented by bespoke studies when specific strategic questions arise. This is harder to budget for and harder to manage, but it produces materially better decisions.

The Difference Between Research That Informs and Research That Drives

There is a meaningful distinction between research that informs strategy and research that drives it. Most organisations operate in inform mode. A smaller number operate in drive mode. The difference is not about the quality of the research. It is about the relationship between the research function and the decision-making function.

When I was growing an agency from around 20 people to closer to 100, one of the things I learned early was that the quality of our strategic recommendations was only as good as the quality of our market understanding. We could not afford to get that wrong. Clients were paying us to have a sharper view of their market than they had themselves, and the only way to sustain that was to build a research capability that was genuinely integrated into how we planned, not bolted on at the end to justify a recommendation we had already made.

That meant asking harder questions at the start of every engagement. What do we actually know about this market, and what are we assuming? Where are the assumptions that, if wrong, would invalidate the entire strategy? Those are the places to invest research effort, not in confirming things you are already confident about.

Research drives strategy when the findings have genuine authority to change the direction. That requires two things: research that is credible and well-constructed, and leadership that is genuinely open to being surprised. The second condition is harder to engineer than the first.

Where Research Goes Wrong in Practice

The methodological failures get the most attention in marketing literature. Leading questions. Unrepresentative samples. Poorly designed surveys. These are real problems, and they matter. But in my experience they are not the most common source of failure.

The most common failure is irrelevance. Research that answers a question nobody was asking, or answers the right question six months after the decision was made, or answers it in a format that the strategic planning team cannot use. I have reviewed research reports that ran to 200 pages and contained exactly three findings that were genuinely actionable. The other 197 pages were not wrong. They were just not useful.

A close second is the absence of a clear brief. Research commissioned without a precise question tends to generate broad findings. Broad findings require interpretation. Interpretation introduces the bias that the research was supposed to remove. The brief is not a formality. It is the most important document in the research process, because it determines whether the findings will be usable.

I spent a period early in my career building things myself when the budget was not there to commission them externally. The discipline that came from that, of having to understand exactly what you needed before you could build it, stayed with me. A research brief is the same thing. You have to know what decision you are trying to make before you can design the study that will help you make it better.

The third common failure is over-reliance on a single method. Quantitative research tells you what is happening at scale. Qualitative research tells you why. Secondary research tells you what the market looks like from the outside. Customer research tells you what it looks like from the inside. Competitive intelligence tells you where you stand relative to alternatives. No single method covers all of that ground, and strategies built on a single research type tend to have blind spots that only become visible after launch.

Research and the Proposition: Where the Connection Is Most Direct

Of all the places where research connects to strategy, the proposition is the most direct. The proposition is the claim you make about what you offer and why it matters. It sits at the intersection of what you can credibly deliver, what customers actually value, and what competitors are not already owning. Research is the only reliable way to map all three of those dimensions accurately.

Customer research, done well, surfaces the language customers use to describe their problems and their ideal outcomes. That language is not just insight. It is copy. It is the basis for messaging that resonates because it reflects how the audience thinks, not how the marketing team thinks. Understanding how customers adopt and engage with products shapes not just what you say but how and when you say it.

Proposition testing research identifies which elements of the offer land and which do not. It surfaces the objections that need to be addressed before they become barriers to conversion. It tells you whether the differentiation you believe you have is actually visible to the people you are trying to reach.

When I have judged marketing effectiveness awards, the campaigns that stand out are almost always the ones where the strategy was clearly grounded in genuine market understanding. You can see it in the precision of the targeting, the relevance of the messaging, and the coherence between what the campaign promised and what the product delivered. That coherence does not happen by accident. It is the product of research that was taken seriously and acted on.

The broader context for all of this sits within the product marketing discipline. If you are working through how research connects to positioning, launch, and competitive strategy, the Product Marketing hub covers each of those areas with the same commercial grounding.

Making Research Count: The Practical Standard

The practical standard for research in strategic planning is simple to state and harder to apply: every piece of research should be traceable to a decision it influenced. If you cannot draw that line, the research was not working hard enough.

That means commissioning research with a decision in mind, not a topic. It means involving the people who will make the decision in the design of the study, so they trust the findings when they arrive. It means presenting findings at the point when the decision is still open, not after it has been made. And it means being willing to act on findings that are inconvenient, not just the ones that confirm what you already believed.

For product launches, this discipline is particularly important. The cost of launching the wrong proposition, to the wrong segment, with the wrong message, at the wrong price, is substantial. Launch strategy built on solid research is not a guarantee of success, but it is a materially better starting position than launch strategy built on internal consensus and optimism.

Research is not expensive relative to the cost of getting strategy wrong. The organisations that treat it as a discretionary line item rather than a core input to planning are pursuing a false economy. The ones that treat it as a genuine strategic asset, and build the processes to use it properly, make consistently better decisions. That is not a complicated argument. It is just one that requires discipline to act on.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the role of market research in strategic planning?
Market research reduces the cost of being wrong in strategic decisions. It replaces assumption with evidence on market size, customer behaviour, competitive positioning, and proposition relevance. Its value is not in the research itself but in how directly it connects to decisions that would have been made differently without it.
When in the planning cycle should market research happen?
Research is most valuable when it precedes the decisions it is meant to inform. Before annual planning, landscape-level research identifies where the market has shifted. During proposition development, qualitative research shapes what you say and to whom. Before launch, validation research tests whether the strategy holds up under realistic conditions. Post-launch, tracking research diagnoses what is working and why.
Why does market research so often fail to influence strategy?
The most common failure is timing: research delivered after decisions have already been made cannot change them. The second is relevance: research commissioned around a broad topic rather than a specific decision generates findings that are interesting but not actionable. The third is organisational: research teams and strategy teams operating in separate silos, with findings presented rather than integrated.
How do you write a good research brief for strategic planning?
A good research brief starts with the decision, not the topic. State precisely what choice is being made, what you currently believe, what would change your mind, and when the decision needs to be made. The methodology follows from the question, not the other way around. A brief that cannot answer “what would we do differently if the findings surprised us?” is not ready to commission.
What types of research are most useful for product marketing strategy?
Product marketing strategy typically requires a combination of customer research to understand decision-making and language, competitive intelligence to identify positioning opportunities, proposition testing to validate messaging, and market sizing to prioritise segments. No single method covers all of that ground. The most effective research programmes combine methods deliberately, matching each to the specific question it is best placed to answer.