Market Research That Changes Decisions
Market research is the process of gathering and analysing information about your customers, competitors, and market conditions to inform business decisions. Done well, it reduces the cost of being wrong. Done poorly, it creates the illusion of certainty while leaving the real questions unanswered.
Most businesses perform some version of market research. Far fewer use it to change anything. That gap, between research as ritual and research as decision-making input, is where most of the value gets lost.
Key Takeaways
- Market research only has value when it is tied to a specific decision. Research without a decision to inform is expensive documentation.
- Primary research tells you what people say. Behavioural data tells you what people do. You need both, and you should weight them differently depending on the question.
- Relative performance matters more than absolute performance. A market growing at 20% while your business grows at 10% is not a success story, regardless of how the numbers look in isolation.
- The most dangerous research output is false precision: a number that looks authoritative but is built on assumptions nobody has examined.
- Research programmes fail most often not because of methodology, but because findings are never connected to anyone with the authority or appetite to act on them.
In This Article
- What Is Market Research Actually For?
- How Do You Define the Research Question?
- What Are the Main Types of Market Research?
- How Do You Conduct Primary Research Without Wasting Budget?
- How Do You Use Secondary Research Without Over-Relying on It?
- How Do You Analyse and Interpret Research Findings?
- How Do You Connect Research to Business Decisions?
- What Does a Practical Market Research Process Look Like?
I spent a long stretch of my career running agencies, and one thing that surprised me early on was how often clients would commission research and then file it. Not because the research was bad. Because the organisation had no mechanism to absorb what it found. The research answered the question asked. It just did not answer the question that mattered, which was: what do we do differently now?
What Is Market Research Actually For?
Before you design a research programme, you need to be honest about why you are doing it. There are broadly three legitimate reasons to conduct market research: to reduce decision risk, to identify opportunity, or to understand why something is not working. Each of these requires a different approach, different data sources, and different outputs.
What is not a legitimate reason: to validate a decision already made. Confirmation-seeking disguised as research is one of the most expensive habits in marketing. It produces outputs that look credible, get presented in decks, and change nothing because they were never designed to challenge anything.
The MarketingProfs piece on customer assumptions makes a point that has stayed with me: marketers consistently overestimate how well they understand their customers, and the gap between assumed understanding and actual understanding tends to be widest in organisations that research least. The businesses most confident they know their customers are often the ones with the most to learn.
If you want a broader view of the research landscape before going deep on methodology, the Market Research and Competitive Intelligence hub covers the full range of tools, approaches, and frameworks worth knowing.
How Do You Define the Research Question?
The research question is not the same as the business question. The business question might be: why is our conversion rate dropping? The research question might be: what are customers experiencing between first contact and purchase that is creating friction? Getting this distinction right shapes everything that follows.
A well-formed research question has three properties. It is specific enough to be answerable. It is connected to a decision someone has the authority to make. And it has a defined scope, meaning there is a point at which you will have enough information to act, rather than perpetually gathering more.
When I was working with a retailer across multiple markets, we were asked to understand why one region was underperforming. The initial instinct from the client was to run a brand tracker. Brand perception was the assumed culprit. We pushed back and reframed the research question around customer experience differences between the high-performing and low-performing regions. What we found had nothing to do with brand. It was a fulfilment and delivery experience problem that the marketing team had no visibility into. A brand tracker would have produced clean data pointing in entirely the wrong direction.
What Are the Main Types of Market Research?
Market research splits into two broad categories: primary research, which you commission or conduct yourself, and secondary research, which draws on existing data and published sources. Both have a role. Neither is sufficient on its own.
Primary research includes surveys, interviews, focus groups, ethnographic observation, and usability testing. Its strength is specificity: you can ask exactly the questions relevant to your situation. Its weakness is that it captures what people say, which is not always what people do. Survey respondents are polite, aspirational, and often inconsistent. They tell you they make rational decisions. Then they buy on impulse and return the product.
Secondary research includes industry reports, published data, competitor analysis, search trend data, social listening, and platform analytics. Its strength is scale and cost. Its weakness is that it is rarely built around your specific question. You are working with data someone else collected for a different purpose, and you need to be careful about how much interpretive weight you put on it.
The most useful research programmes combine both. Quantitative data tells you the shape of a problem. Qualitative research tells you why the shape exists. Running them in sequence, qual first to identify the hypotheses, quant second to size them, tends to produce the most actionable output.
Forrester’s writing on strategy versus technique is worth reading here. The distinction applies cleanly to research design: technique without strategy produces data. Strategy applied to technique produces insight.
How Do You Conduct Primary Research Without Wasting Budget?
The biggest waste in primary research is unnecessary scale. Organisations default to large sample sizes because they feel more credible. In most cases, you can answer a qualitative question with eight to twelve in-depth interviews. You will reach saturation, the point where new conversations stop producing new themes, faster than you expect. Adding more respondents after that point adds cost and confidence, but rarely changes the finding.
For surveys, the quality of the question design matters far more than the sample size. Leading questions, double-barrelled questions, and Likert scales with ambiguous endpoints are endemic in DIY research. They produce data that looks precise and is structurally unreliable. If you are running surveys without someone who has been trained in questionnaire design reviewing them, the outputs should be treated with caution.
Recruitment is the other underestimated variable. Surveying your existing customers tells you about your existing customers. If your research question is about why people are not converting, or why people are choosing competitors, you need to recruit outside your current base. This costs more and takes longer, but the alternative is research that answers a question you did not ask.
I have seen six-figure research programmes produce findings that could have been generated by three hours of well-structured customer interviews. I have also seen small, scrappy qual programmes surface insights that reshaped a product roadmap. Budget is not a proxy for quality in research. Rigour is.
How Do You Use Secondary Research Without Over-Relying on It?
Secondary research is fast and cheap relative to primary, which makes it the default starting point for most market research programmes. That is fine, as long as you understand what you are working with.
Industry reports from firms like Forrester are useful for market sizing, trend identification, and benchmarking. They are less useful for understanding your specific customer in your specific context. The data is aggregated, often global, and built on methodologies you rarely get to interrogate. Use them to frame the landscape. Do not use them as a substitute for understanding your own market.
Search data is one of the most underused secondary research tools available. What people search for, how they phrase problems, and what questions they ask are real-time signals of unmet need. Moz’s research blog has covered the methodology extensively. The principle is straightforward: search intent is revealed preference. People do not search for things they do not care about.
Social listening sits in a similar category. It captures language your customers actually use, complaints that never make it to a survey, and competitor weaknesses that are visible in public conversation. The limitation is selection bias: the people who post publicly about products are not representative of the broader customer base. Treat it as a signal, not a sample.
One thing I always push teams on with secondary research: check when it was published and who funded it. A market sizing report from three years ago in a category that has moved significantly is not a reliable input. And research funded by a vendor with a commercial interest in the finding should be read with appropriate scepticism, regardless of how authoritative the methodology section looks.
How Do You Analyse and Interpret Research Findings?
Analysis is where most research programmes lose their value. The data comes back, someone produces a summary deck, the findings get presented, and then the organisation returns to doing what it was already doing. This is not a research problem. It is an organisational problem. But it is one that good research design can partially mitigate.
The most important discipline in analysis is separating what the data says from what you want it to say. These are different things, and the gap between them is where bias lives. One technique that helps: before you look at the findings, write down what you expect to find. Then compare. Where the data confirms your expectations, probe harder. Where it contradicts them, those are the findings worth paying attention to.
Relative performance is something I come back to repeatedly in research interpretation. A business that grew revenue by 10% last year might look at that number and feel satisfied. But if the market grew by 20%, that 10% growth represents a loss of market share. Absolute numbers without market context are not just incomplete, they are actively misleading. When I have been in rooms where this distinction gets surfaced, it changes the tone of the conversation entirely.
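The arithmetic behind this point is worth making explicit. A minimal sketch, using purely illustrative numbers (a hypothetical $100M market in which you hold a 10% share):

```python
# Illustrative figures only: a hypothetical market and starting position.
market_before = 100_000_000       # market size last year
market_growth = 0.20              # market grew 20%
our_revenue_before = 10_000_000   # our revenue last year (a 10% share)
our_growth = 0.10                 # we grew 10%

market_after = market_before * (1 + market_growth)         # 120,000,000
our_revenue_after = our_revenue_before * (1 + our_growth)  # 11,000,000

share_before = our_revenue_before / market_before  # 0.10
share_after = our_revenue_after / market_after     # ~0.0917

print(f"Share moved from {share_before:.1%} to {share_after:.1%}")
# Growing 10% in a market growing 20% means share fell from 10.0% to ~9.2%,
# a relative loss of roughly 8% of the share you held.
```

The general relationship: your share is multiplied by (1 + your growth) / (1 + market growth), so any time that ratio is below one, headline revenue growth is masking a share loss.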
False precision is the other analytical trap. A survey of 200 respondents telling you that 67.3% prefer option A over option B is not a finding. It is a number dressed up as a finding. The precision of the figure implies a level of reliability the methodology does not support. Good analysis presents findings with appropriate confidence levels and is honest about what the data cannot tell you.
How Do You Connect Research to Business Decisions?
Research that does not change a decision is a sunk cost with a nice presentation attached. The connection between findings and action needs to be designed in from the start, not bolted on at the end.
This means identifying, before the research begins, who will receive the findings, what decisions they are responsible for, and what the threshold is for changing course. If you cannot answer those questions, you are not ready to commission research. You are ready to commission a report.
The most effective research programmes I have seen operate with a decision brief alongside the research brief. The decision brief specifies: what are we deciding, when do we need to decide it, and what would change our current position? The research is then designed to answer those specific questions rather than to produce a comprehensive picture of everything knowable about the market.
Stakeholder involvement matters here too. Research findings land better when the people who need to act on them have been involved in shaping the questions. Not in a way that biases the methodology, but in a way that ensures the outputs are legible and relevant to the people with authority to act. Findings presented to an audience who had no input into the research design are easy to dismiss. Findings that emerge from a process stakeholders participated in are harder to ignore.
There is a broader point worth making about measurement culture. If you could retrospectively trace every piece of market research conducted in a business back to the decisions it influenced, most organisations would find the return on research investment concentrated in a small number of projects. The rest produced outputs that informed no decision and changed no behaviour. Fixing that is not a research methodology problem. It is a governance problem. But it starts with being honest about it.
What Does a Practical Market Research Process Look Like?
Stripped back to its essentials, a market research process has six stages. Each one is a checkpoint, not a formality.
1. Define the decision. What are you deciding, and when? This is the anchor for everything else. If you cannot state the decision clearly, the research will drift.
2. Form the research question. Translate the business decision into a specific, answerable research question. This is not the same step. The business question is strategic. The research question is operational.
3. Choose your methodology. Primary or secondary, qual or quant, or a combination. The methodology should follow the question, not the budget or the timeline, though both will constrain your options in practice.
4. Design the instruments. Surveys, discussion guides, search queries, data pulls. This is where bias enters most easily. Get a second set of eyes on anything customer-facing before it goes into field.
5. Collect and analyse. Be rigorous about separating data collection from interpretation. Analysis should be structured, not impressionistic. Document what the data says before you start explaining it.
6. Connect to action. Present findings in the context of the original decision. What does this mean for what we do next? Who owns the next step? By when?
This is not complicated. What makes it hard is organisational discipline: the willingness to follow the process when the findings are inconvenient, when the timeline is compressed, or when the answer is not the one that was hoped for.
For more on how research connects to competitive strategy and the tools that support ongoing intelligence gathering, the Market Research and Competitive Intelligence hub covers the full picture, from methodology to tooling to programme design.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
