Corporate Strategy Research: What Most Companies Get Wrong
Corporate strategy research is the process of gathering, analysing, and synthesising market, competitive, and organisational intelligence to inform decisions about where a company plays and how it wins. Done well, it reduces the gap between assumption and reality before money moves. Done poorly, it produces expensive documents that validate decisions already made.
Most companies sit closer to the second description than they would admit.
Key Takeaways
- Corporate strategy research is only valuable when it genuinely challenges internal assumptions, not when it is commissioned to confirm them.
- The most dangerous research is the kind that looks rigorous but is built on flawed framing: wrong question, credible-looking answer.
- Competitive intelligence is not the same as strategic insight. Knowing what rivals are doing tells you little about whether you should follow them.
- Most organisations underinvest in demand-side research and overinvest in internal analysis, which is the opposite of what growth requires.
- Strategy research should produce a decision, not a presentation. If the output is a deck with no clear implication, the research has not finished its job.
In This Article
- Why Most Corporate Strategy Research Fails Before It Starts
- The Difference Between Competitive Intelligence and Strategic Insight
- What Good Corporate Strategy Research Actually Looks Like
- The Framing Problem: How Wrong Questions Produce Confident Wrong Answers
- Where Organisations Underinvest in Strategy Research
- How to Make Strategy Research Commercially Useful
- The Role of External Perspective in Corporate Strategy Research
- What Corporate Strategy Research Cannot Do
Why Most Corporate Strategy Research Fails Before It Starts
The failure usually happens at the brief. Someone senior frames a question in a way that already contains the answer they are hoping for. The research team, internal or external, picks it up and runs. Six weeks later, a polished set of slides arrives that confirms the original hypothesis with enough data points to make it feel credible. The decision was already made. The research just provided cover.
I have been in rooms where this happened with budgets large enough to fund a small agency for a year. The tell is always the same: the brief is written in terms of a solution rather than a problem. “We need research to support our expansion into X market” is not a research brief. It is a mandate with a question mark stapled to the front.
Real strategy research starts with genuine uncertainty. The organisation does not know something important, and the research is designed to find out, with results that could go either way. If there is no scenario in which the research could come back and say “do not do this,” the research is not research. It is theatre.
This connects to something broader about how growth strategy actually works. If you want a grounded view of the frameworks and thinking that sit behind market entry and commercial expansion, the Go-To-Market and Growth Strategy hub covers it in more depth.
The Difference Between Competitive Intelligence and Strategic Insight
There is a category of corporate strategy research that organisations love because it feels safe: competitive benchmarking. Map the landscape, plot the players, identify the white space. It is a legitimate exercise. It is also frequently misused as a substitute for harder thinking.
Knowing what your competitors are doing tells you what has already been decided, usually by organisations with different resources, different histories, and different constraints. It tells you almost nothing about whether you should follow them. The companies that have shaped markets rarely did so by studying what everyone else was doing and finding a gap. They had a conviction about something the market had not yet priced in.
Early in my career I was heavily focused on lower-funnel performance metrics. Conversion rates, cost per acquisition, return on ad spend. The data was clean and the feedback loops were fast. What I underestimated was how much of that performance was simply capturing demand that already existed. We were good at being there when someone was ready to buy. We were not doing much to make more people ready to buy. Competitive benchmarking has the same problem. It tells you who is winning the existing game. It does not tell you whether the game is worth playing, or whether a different game is possible.
BCG’s work on go-to-market strategy in financial services makes this point in a different context: understanding what customers actually need, rather than mapping what competitors offer, tends to surface more durable strategic opportunities. The same logic applies across sectors. Their research on evolving customer needs is worth reading if you work in markets where the demand side is shifting faster than the supply side has noticed.
What Good Corporate Strategy Research Actually Looks Like
Good strategy research has four qualities that are easy to describe and surprisingly hard to deliver consistently.
First, it is structured around decisions, not topics. The research is commissioned because someone needs to make a specific choice and does not have enough information to make it confidently. The output is designed to inform that choice, not to produce a general overview of the category.
Second, it includes demand-side intelligence. Most corporate strategy research skews heavily toward internal analysis and competitive mapping, which are both supply-side exercises. The harder and more valuable work is understanding how customers actually think, what they value, how they make decisions, and what would have to change for them to behave differently. This is where market penetration thinking becomes relevant: before you can grow share, you need to understand why the people who are not buying from you have made that choice.
Third, it is honest about what it does not know. The most valuable section of any strategy research document is often the one that says: here is what we could not determine, here is why, and here is what that uncertainty means for the decision. Research that presents everything with equal confidence is usually hiding something.
Fourth, it has a clear implication. Not a recommendation necessarily, though that is often useful. But a clear statement of what the research means for the decision at hand. If you read the executive summary and cannot tell what the organisation should do differently as a result, the research has not done its job.
The Framing Problem: How Wrong Questions Produce Confident Wrong Answers
I judged the Effie Awards for several years. The Effies are about marketing effectiveness, which means you spend a lot of time reading cases where organisations are trying to prove that their work drove a business outcome. What you learn quickly is that the relationship between a marketing activity and a business result is almost never as clean as the case study makes it look. There are always confounding variables. There is always some selection in what gets reported.
Corporate strategy research has the same problem at a larger scale. The framing of the question shapes the methodology. The methodology shapes the data collected. The data collected shapes the conclusions. If the question is wrong at the start, you can execute every subsequent step with complete rigour and still end up with a confidently wrong answer.
This is why the most important skill in corporate strategy research is not analytical. It is the ability to interrogate the brief before accepting it. To ask: what decision is this actually serving? What would we need to believe for this research to be useful? What would change our minds? If the people commissioning the research cannot answer those questions, the research should not start yet.
Forrester’s work on go-to-market challenges in healthcare illustrates this well. Organisations in that sector often commission research to validate a product launch strategy, only to discover that the real obstacle is not market sizing or competitive positioning but the way buying decisions are made inside the customer organisation. Their analysis of device and diagnostics go-to-market struggles shows how often the wrong question gets expensive answers.
Where Organisations Underinvest in Strategy Research
There are three areas where I consistently see organisations spending less than they should, relative to the value at stake.
The first is pre-launch market research. Not the kind that asks people whether they would buy a hypothetical product (they always say yes), but the kind that tests whether the problem the product solves is actually experienced as a problem worth solving. BCG’s work on biopharma product launches captures this well: the organisations that execute successful launches tend to have done more work before the launch to understand how the market actually processes new options, not just whether the option is objectively better. Their framework for biopharma launch strategy translates into other sectors more than most people expect.
The second is research into non-customers. Most organisations spend their research budget understanding their existing customers better. That is useful. It is also limited, because existing customers have already made a decision in your favour. The more strategically valuable group is the people who could buy from you but have not. Understanding their decision-making tells you something about the ceiling of your current model, and what would need to change to raise it.
The third is ongoing market intelligence, as distinct from point-in-time research projects. Markets shift. Customer priorities shift. Competitive dynamics shift. Organisations that commission a strategy research project every three years and treat the findings as settled truth until the next project are operating on a lag that compounds over time. The organisations that build continuous intelligence into their operating rhythm make better decisions faster, not because they have more data but because they have fresher context.
How to Make Strategy Research Commercially Useful
When I was running agencies and working through turnarounds, the most useful research was always the kind that connected directly to a resource allocation decision. Not “here is an interesting picture of the market” but “here is why you should put more money here and less money there.” The commercial usefulness of research is almost entirely determined by how tightly it is connected to a specific decision with real stakes.
That means the people commissioning research need to be willing to say, before the research starts: if the findings point in direction A, we will do X. If they point in direction B, we will do Y. If you cannot complete that sentence, you are not ready to commission research. You are ready to commission thinking, which is a different engagement.
It also means being honest about the difference between research that informs a decision and research that validates one. Both exist. Both have legitimate uses. The problem is when the second is presented as the first. I have seen organisations spend significant money on what was essentially a confirmation exercise, then act surprised when the findings did not produce any new thinking. The research did exactly what it was designed to do. It was just designed to do the wrong thing.
Growth hacking literature makes a related point from a different angle. The best-documented examples of rapid growth, whether in SaaS or consumer, tend to involve organisations that ran genuine experiments with genuine uncertainty about the outcome, rather than activities designed to confirm a pre-existing view. Semrush’s breakdown of growth hacking examples shows that the common thread is not a particular tactic but a particular relationship to uncertainty: comfortable enough with it to test, disciplined enough to read the results honestly.
The same principle applies to strategy research at the corporate level. The organisations that use it well are the ones that have built a culture where being wrong about a hypothesis is acceptable, as long as you find out early and adjust. The ones that use it poorly are the ones where being wrong is politically costly, so the research is unconsciously shaped to avoid that outcome.
The Role of External Perspective in Corporate Strategy Research
There is a standing debate about whether strategy research should be done internally or externally. The honest answer is that it depends on what you are trying to find out and how much internal bias is likely to distort the process.
Internal teams have context that external researchers take weeks to acquire. They know the history, the politics, the constraints that do not appear in any briefing document. That context is genuinely valuable. It is also genuinely dangerous, because it makes it harder to see the market as it is rather than as the organisation has always understood it.
External researchers bring fresh framing and no stake in the answer. They are more likely to ask the question that everyone internally has stopped asking because it became uncomfortable. That is worth paying for, especially when the decision at stake is large and the internal consensus is strong. Strong internal consensus is not evidence of correctness. It is often evidence that the right questions have not been asked recently.
The most effective approach I have seen is a structured combination: internal teams define the decision and the constraints, external researchers design and execute the intelligence gathering, and the synthesis happens collaboratively. Neither side owns the conclusion. The conclusion is owned by the evidence.
There is more on how research connects to go-to-market execution across the Go-To-Market and Growth Strategy hub, including how to think about market entry, audience strategy, and the relationship between research and commercial planning.
What Corporate Strategy Research Cannot Do
Research can reduce uncertainty. It cannot eliminate it. This sounds obvious but is routinely forgotten in the way research is commissioned and used.
I remember early in my career sitting in a brainstorm that was supposed to be led by someone else. They had to leave unexpectedly and handed me the whiteboard pen with about thirty seconds of context. My internal reaction was something close to panic. But what I learned from that experience was that the absence of certainty does not have to mean the absence of direction. You make the best call you can with the information available, you stay alert to signals that suggest you are wrong, and you adjust.
Corporate strategy research is a tool for making better calls with better information. It is not a mechanism for removing the need to make calls. Organisations that treat it as the latter end up in analysis paralysis, commissioning more research to resolve the uncertainty that the last research project did not fully resolve, deferring decisions until the market has moved on without them.
The organisations that use research well understand that it is an input, not an answer. The answer still requires judgement. Research just makes that judgement less likely to be based on something that is obviously wrong.
That is, in the end, what good corporate strategy research is for: not certainty, but honest approximation. Not a guarantee, but a more defensible starting point. The decisions still have to be made by people willing to make them.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
