Market Research Studies: What They Tell You and What They Don’t

Market research studies are one of the most useful tools a marketing team can have, and one of the most misused. At their best, they reduce the cost of being wrong. At their worst, they create false confidence in decisions that were already made before the research began.

Understanding what a market research study can genuinely tell you, and where its limits are, is more valuable than knowing how to commission one. The methodology matters. So does the question you are trying to answer before you start.

Key Takeaways

  • Market research studies reduce the cost of being wrong, but only when the research question is defined before the methodology is chosen.
  • Primary research gives you original data you own. Secondary research gives you context. Both are necessary, but they answer different questions.
  • Survey design is where most market research fails. Leading questions, small samples, and self-selection bias can make findings look conclusive when they are not.
  • Qualitative research tells you why people behave the way they do. Quantitative research tells you how many people behave that way. Confusing the two is a common and expensive mistake.
  • The most dangerous output from a market research study is a number that sounds precise but was built on shaky methodology.

What Is a Market Research Study, Exactly?

A market research study is a structured process for gathering information about a market, audience, competitor, or behaviour. It can be as simple as a ten-question customer survey or as complex as a multi-wave quantitative study across several countries. The format varies. The purpose is always the same: to reduce uncertainty before making a decision.

The term gets used loosely. People call a Google search a market research study. They call pulling three competitor websites a competitive analysis. They call reading an industry report market intelligence. None of these are wrong, exactly, but they are not the same as a properly designed research study with a defined question, a clear methodology, and a representative sample.

I have been in rooms where a brand manager presented findings from a survey of 47 customers as if they were statistically significant. I have also seen agencies spend months and serious budget on research that told the client something they already knew. Both are failures of process, not intent. The discipline of market research is worth taking seriously precisely because it is so easy to do badly.

If you are building out your broader approach to market intelligence, the Market Research and Competitive Intel hub covers the full landscape, from tools and frameworks to how research connects to commercial strategy.

What Are the Main Types of Market Research Study?

There are two fundamental distinctions worth understanding before anything else: primary versus secondary research, and qualitative versus quantitative research. These are not interchangeable categories. They sit on different axes.

Primary research is data you collect yourself, directly from the source. Surveys, interviews, focus groups, ethnographic observation, usability testing. You own the data. You control the methodology. You can ask exactly the question you need answered. The cost is higher and the time investment is real.

Secondary research is data that already exists. Industry reports, published academic studies, government datasets, trade association figures, analyst briefings. It is faster and cheaper to access, but you are working with someone else’s methodology and someone else’s question. The data may not map cleanly to your specific market or audience.

Qualitative research explores depth. It tells you why people think or behave the way they do. Focus groups, in-depth interviews, and observational research all sit here. The samples are small by design. You are not trying to count things. You are trying to understand them.

Quantitative research explores scale. It tells you how many people do something, how often, and with what degree of statistical confidence. Surveys with large representative samples, A/B tests, and structured behavioural data all sit here. The value is in the numbers. The risk is in treating those numbers as more precise than the methodology warrants.

Most serious market research programmes use both. Qualitative research first to understand the territory, then quantitative research to measure it. Running them in the wrong order, or substituting one for the other, is where a lot of research budgets go wrong.

Where Do Market Research Studies Go Wrong?

I have judged the Effie Awards, which means I have read a lot of case studies where research was cited to justify creative and strategic decisions. Some of it was rigorous. Some of it was post-rationalisation dressed up as evidence. The difference matters, and experienced marketers can usually tell.

The most common failure is starting with the answer and working backwards. Someone in the business has already decided what they want to do. The research is commissioned to validate that decision, not to test it. The survey questions are written to produce the desired result. The sample is chosen because it is convenient, not because it is representative. The findings are presented with confidence intervals that the methodology does not support.

This is not always deliberate. Confirmation bias is real, and it operates quietly. The person writing the survey questions genuinely believes they are being neutral. The person reading the findings genuinely believes they are being objective. But the research has been shaped by the conclusion it was meant to reach, and the decisions made on its basis are no more informed than they would have been without it.

The second common failure is sample size and selection. A survey of 200 people from your existing customer email list tells you something about your existing customers. It tells you almost nothing about the broader market, about lapsed customers, or about people who chose a competitor. Self-selection bias means that people who respond to surveys are systematically different from those who do not. This does not make survey data useless. It makes it partial, and you need to account for that when you interpret it.
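To make the sample-size point concrete, the margin of error on a simple yes/no survey question can be estimated from the sample size alone. This is an illustrative sketch, not part of any study cited here; it uses the standard worst-case normal approximation (p = 0.5, 95% confidence) and, importantly, it cannot correct for the self-selection bias described above. No formula can.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a simple proportion.

    Normal approximation: z * sqrt(p * (1 - p) / n).
    p = 0.5 gives the widest, most conservative interval;
    z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (47, 200, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n = 47 gives roughly +/- 14 points; n = 200 roughly +/- 7; n = 1000 roughly +/- 3
```

At roughly seven points either way, a survey of 200 respondents cannot distinguish 47% from 53% on a single question, even before accounting for who chose to respond.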

The third failure is question design. Leading questions, double-barrelled questions, and questions that assume a premise all produce unreliable data. Feedback tools like Hotjar's show how even small changes in question framing can shift response patterns significantly. This is not a digital-only problem. It applies to every survey format.

How Do You Design a Market Research Study That Produces Useful Findings?

Start with the decision, not the data. Before you design a single survey question, write down the decision you are trying to make and what information would change that decision. If you cannot answer that second part, you are not ready to commission research.

This sounds obvious. In practice, most research briefs I have seen describe what data the client wants to collect, not what decision the research is meant to inform. The result is research that produces interesting findings with no clear commercial application.

Once you have the decision defined, choose the methodology that fits the question. If you are trying to understand why customers churn, start with qualitative interviews. If you are trying to quantify how large a market segment is, you need a properly structured quantitative study. If you are trying to understand how users interact with a product page, behavioural data from session recording tools will tell you more than a survey ever could.

Sample design is worth taking seriously. Who you include in the research, and how you recruit them, shapes what you find. A nationally representative sample requires deliberate effort. Convenience samples are faster and cheaper, but the findings need to be interpreted with that limitation in mind.
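One way to make that deliberate effort concrete: the margin-of-error formula can be inverted to estimate how many completed responses a target precision requires. This is a sketch under assumptions, not a recruitment plan; it assumes a simple random sample with the conservative p = 0.5, and real fieldwork has to over-recruit to allow for non-response.

```python
import math

def required_sample_size(margin, z=1.96, p=0.5):
    """Completed responses needed for a target margin of error.

    Inverts the normal approximation: n = z^2 * p * (1 - p) / margin^2.
    Assumes a simple random sample; a convenience sample does not qualify.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

for target in (0.05, 0.03, 0.025):
    print(f"+/- {target:.1%} at 95% confidence: {required_sample_size(target)} responses")
```

Note that halving the margin of error roughly quadruples the required sample, which is why tight precision targets dominate research budgets.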

Question design should be tested before the survey goes live. Cognitive testing, where you ask a small number of people to talk through how they interpreted each question, catches problems that are invisible to the person who wrote the questions. It is a small investment that materially improves data quality.

Finally, decide in advance how you will analyse the data. Not after you see the results. If you define your analysis plan after the data comes in, you will find patterns that confirm what you already believed. Pre-registration of analysis plans is standard practice in academic research for exactly this reason. It is worth borrowing for commercial research too.

What Can Secondary Research Actually Tell You?

Secondary research is underrated when used correctly and dangerous when used carelessly. Industry reports from credible sources, government data, and published academic research can give you a solid foundation for understanding market size, growth trends, demographic shifts, and competitive dynamics. The work has already been done. The cost is low. The speed is high.

The limitation is that secondary research answers someone else’s question. An industry report on digital advertising spend tells you what the market looks like in aggregate. It does not tell you whether your specific audience in your specific category behaves the way the aggregate suggests. The further your situation is from the average, the less useful the aggregate data becomes.

I spent time early in my career at lastminute.com, where the travel and entertainment market was moving fast enough that last year’s industry data was often misleading by the time it was published. We relied heavily on our own behavioural data, our own testing, and our own customer conversations because the secondary sources could not keep up with how quickly the market was shifting. Secondary research set the context. Primary research answered the operational questions.

Secondary research is also where citation quality matters most. A statistic that has been passed from blog post to blog post, each time losing its original context, is not evidence. It is noise. If you cannot trace a number back to the original methodology, treat it with caution. Resources like detailed industry reporting that cite original sources are worth far more than aggregated summaries that do not.

How Do You Turn Research Findings into Commercial Decisions?

Research findings are not decisions. They are inputs to decisions. This distinction matters more than most marketing teams acknowledge.

I have seen research findings presented in ways that left the business with no clear path forward. The data was interesting. The analysis was competent. But the so-what was missing. What should the business do differently as a result of these findings? What assumption has been confirmed or challenged? What is the recommended next step?

Good research outputs answer three questions: what did we find, what does it mean for the business, and what should we do next. If a research report cannot answer all three, it has not finished the job.

The translation from insight to action is also where research gets filtered through organisational politics. Findings that challenge existing strategy tend to be downplayed. Findings that support existing strategy tend to be amplified. This is a cultural problem more than a research problem, but it is worth naming because it explains why organisations with substantial research budgets sometimes make decisions that the research should have prevented.

One practical approach is to present research findings alongside the decision they were meant to inform, and to be explicit about which findings change the recommended course of action and which do not. This keeps the research tethered to its commercial purpose rather than floating as an interesting document that gets filed and forgotten.

Frameworks like a structured competitor analysis template can help organise secondary research findings into a format that makes the commercial implications clearer. The structure forces you to connect data to decision, which is where most research processes fall short.

What Role Does Behavioural Data Play Alongside Traditional Research?

Traditional market research asks people what they think and what they do. Behavioural data shows you what they actually do. The gap between the two is often significant.

People are not reliable narrators of their own behaviour. They misremember. They tell you what they think you want to hear. They describe their ideal behaviour rather than their actual behaviour. This is not dishonesty. It is how human memory and social dynamics work. It means that survey data about behaviour needs to be treated differently from survey data about attitudes or preferences.

Behavioural data from analytics platforms, session recording tools, and conversion testing fills this gap. It does not tell you why people do things. But it tells you what they do with a precision that self-reported data cannot match. The combination of behavioural data and qualitative research is particularly powerful: the behavioural data identifies what is happening, the qualitative research explains why.

Understanding how users interact with design and interface elements is a good example of where behavioural data outperforms survey data. You can ask users whether they found a page easy to use. Or you can watch what they do on the page and see exactly where they hesitate, where they drop off, and what they click. The second approach is more reliable and more actionable.

The broader point is that market research studies and digital behavioural data are not competing methodologies. They answer different questions. A mature research programme uses both, deliberately, with a clear view of which question each approach is best placed to answer.

How Much Should You Spend on Market Research?

There is no universal answer, but there is a useful frame: the cost of the research should be proportionate to the cost of the decision it is informing. Spending significant budget to research a small tactical decision is poor resource allocation. Spending nothing to research a major strategic decision is a different kind of poor resource allocation.

Early in my career, before I had budget for formal research, I got resourceful. I would read everything I could find, talk to customers directly, and use whatever data was already available in the business. The constraint forced a kind of discipline: if you cannot do comprehensive research, you have to be very clear about what question matters most and focus your limited resources there.

That instinct has stayed with me. Even when I was running agencies with access to proper research budgets, the first question was always: what is the minimum research investment that would meaningfully reduce the uncertainty in this decision? Sometimes that was a small qualitative study. Sometimes it was a full quantitative programme. Sometimes it was a structured review of data that already existed in the business.

The worst research spending I have seen is research commissioned to look thorough rather than to be useful. Large, expensive studies that produce thick reports, presented at a leadership offsite, and then filed. The research theatre is costly. The commercial value is close to zero.

If you want to build a research capability that genuinely informs commercial decisions, the Market Research and Competitive Intel hub is a good place to map out what that looks like across tools, methods, and strategic application.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between primary and secondary market research?
Primary research is data you collect directly, through surveys, interviews, or observation. You control the methodology and own the data. Secondary research uses data that already exists, such as industry reports, government datasets, or published studies. Primary research is more expensive and time-consuming but answers your specific question. Secondary research is faster and cheaper but may not map precisely to your market or audience.
How do you know if a market research study is reliable?
Look at the sample size, how participants were recruited, and whether the methodology matches the question being asked. A reliable study will be transparent about its limitations. Warning signs include very small samples presented as statistically significant, self-selected convenience samples described as representative, and findings that lack confidence intervals or margin of error disclosures.
When should you use qualitative research versus quantitative research?
Use qualitative research when you need to understand why people behave in a certain way, or when you are exploring a new area and do not yet know what questions to ask at scale. Use quantitative research when you need to measure the size of a behaviour, attitude, or segment with statistical confidence. The two approaches work best in sequence: qualitative first to understand the territory, quantitative to measure it.
What is the most common mistake in market research survey design?
Leading questions are the most common problem. A leading question steers the respondent toward a particular answer, either through the framing of the question or the order of the response options. Other common issues include double-barrelled questions that ask two things at once, questions that assume a premise the respondent may not share, and response scales that are not balanced or consistent throughout the survey.
How do you translate market research findings into business decisions?
Start by connecting every finding back to the decision the research was commissioned to inform. A useful research output answers three questions: what did we find, what does it mean for the business, and what should we do next. If a research report cannot answer all three, the analysis is incomplete. Presenting findings alongside the original decision question keeps the research commercially grounded rather than academically interesting.
