Market Research Types That Change Decisions
Market research is the discipline of gathering and interpreting information about customers, competitors, and markets to inform business decisions. The main types fall into two broad categories: primary research, which you collect yourself, and secondary research, which draws on existing data. Within those, the methods range from quantitative surveys and sales data to qualitative interviews and ethnographic observation.
Knowing the taxonomy matters less than knowing which type answers your specific question. The wrong research method applied to the right question produces confident nonsense, and I have seen that happen more times than I care to count across 20 years of agency work.
Key Takeaways
- Primary and secondary research serve different purposes: primary answers questions nobody has asked yet, secondary tells you what is already known.
- Qualitative research explains the “why” behind behaviour; quantitative research tells you how many people behave that way. You need both to make confident decisions.
- The most common market research mistake is choosing a method before defining the question: surveys cannot replace customer conversations, and focus groups cannot replace sales data.
- Competitive intelligence is a form of market research that most product teams underinvest in, leaving them to discover competitor positioning only after launch.
- Research quality is determined by how it changes decisions, not by sample size or methodology sophistication.
In This Article
- Why Most Teams Pick the Wrong Research Method
- Primary vs Secondary Research: What the Distinction Actually Means
- Qualitative Research: The Methods That Explain Behaviour
- Quantitative Research: The Methods That Measure Scale
- Competitive Intelligence: The Research Type Most Teams Neglect
- Secondary Research: What Existing Data Can and Cannot Tell You
- Customer Research: The Methods Closest to Commercial Truth
- Pricing Research: A Specialist Area Most Teams Get Wrong
- How to Choose the Right Research Type for Your Question
- The Relationship Between Research and Sales Enablement
- What Good Market Research Actually Looks Like in Practice
Why Most Teams Pick the Wrong Research Method
Early in my career, I watched a brand team commission a large-scale quantitative survey to answer the question: “Why are customers churning?” They got back statistically significant data showing that 42% of churned customers cited “price” as the main reason. The team accepted this, reduced prices, and churn barely moved. When we eventually ran depth interviews with a small sample of the same cohort, the real answer emerged: the product was confusing to use and customers felt embarrassed to admit that on a survey. Price was the socially acceptable answer.
This is not a failure of market research. It is a failure of method selection. Surveys are blunt instruments when the question requires nuance. The team chose quantitative because it felt more credible, not because it was more appropriate.
Choosing the right research type starts with being honest about what you actually need to know. There are four fundamental questions that drive method selection: What is happening? Why is it happening? How many people does it affect? What will they do next? Each of those maps to different tools.
If you are working through broader product marketing questions, the product marketing hub covers the full commercial picture, from positioning and pricing to launch strategy and competitive analysis.
Primary vs Secondary Research: What the Distinction Actually Means
Primary research is data you collect directly, for a specific purpose, from a specific audience. Secondary research is data that already exists, collected by someone else for their own purposes, which you repurpose for yours.
Both are legitimate. Neither is inherently superior. The practical difference is cost, speed, and specificity. Secondary research is cheaper and faster but less tailored. Primary research is slower and more expensive but answers your exact question. In most commercial projects, you should start with secondary research to understand the landscape, then commission primary research to fill the gaps that matter.
When I was growing an agency from 20 to over 100 people, we could not afford to commission primary research for every pitch. We got very good at synthesising secondary sources: industry reports, competitor case studies, platform data, and publicly available consumer research. That desk research shaped our strategic thinking before we ever spoke to a client’s customers. It also meant that when we did run primary research, we were asking sharper questions because we already knew what the secondary data could not tell us.
Qualitative Research: The Methods That Explain Behaviour
Qualitative research is concerned with meaning, not measurement. It tells you what people think, feel, and believe, and why they make the choices they make. It does not tell you how many people think that way. That is not a weakness. It is a design feature.
In-depth interviews are the most valuable qualitative tool in product marketing. A skilled interviewer can surface assumptions, anxieties, and decision-making processes that no survey would ever capture. What matters is interviewing people who have recently made the decision you care about, whether that is purchasing, churning, upgrading, or ignoring your category entirely. Building accurate buyer personas depends almost entirely on the quality of these conversations, not on demographic data.
Focus groups are useful and overused. They are useful for exploring reactions to concepts, testing messaging, and understanding the vocabulary customers use to describe a problem. They are overused because they are visible, they feel productive, and they give stakeholders something to observe. The risk is groupthink: the most confident voice in the room shapes the data, and people moderate their honest opinions in social settings. I have sat behind one-way glass watching focus groups tell a client what they thought the client wanted to hear. That data is worse than no data.
Ethnographic research involves observing people in their natural environment rather than asking them to describe their behaviour in an artificial setting. For product teams, this might mean watching someone use your product in their actual workflow rather than in a lab. What people say they do and what they actually do are frequently different. Ethnographic methods close that gap.
Diary studies ask participants to record their thoughts, behaviours, or experiences over a period of time. They are particularly useful when the behaviour you are studying is habitual or spread across multiple touchpoints, because they capture real-time experience rather than reconstructed memory.
Quantitative Research: The Methods That Measure Scale
Quantitative research converts human behaviour and opinion into numbers. It answers questions of scale, frequency, and statistical significance. It is the tool you reach for when you need to know how many, how often, or how much.
Surveys are the most common quantitative tool and the most frequently misused. A well-designed survey with a representative sample and carefully constructed questions produces genuinely useful data. Most surveys are not well-designed. They are too long, use leading questions, rely on self-reported behaviour rather than observed behaviour, and are distributed to convenience samples that do not represent the target population. If you are running surveys, invest more time in questionnaire design than in sample size. A clean question answered by 200 people beats a leading question answered by 2,000.
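The trade-off between question quality and sample size can be made concrete. A minimal sketch, assuming simple random sampling and a 95% confidence level (the figures here are illustrative, not from any study):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Moving from 200 to 2,000 respondents narrows the margin of error,
# but it cannot correct a biased or leading question.
print(f"n=200:  +/- {margin_of_error(200):.1%}")
print(f"n=2000: +/- {margin_of_error(2000):.1%}")
```

A tenfold larger sample only cuts sampling error by about a factor of three, and does nothing at all for question design, which is the point the paragraph above makes.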
Behavioural data is the most underrated form of quantitative research. Your CRM, your analytics platform, your transaction records, your product usage logs: these tell you what customers actually did, not what they say they would do. When I was running paid search at lastminute.com, we could see in near real-time how different audience segments responded to different offers. That behavioural data was more commercially useful than any survey we ran, because it reflected revealed preference rather than stated preference.
A/B testing is a specific form of quantitative research that isolates variables to measure causal impact. It is the closest thing to a controlled experiment that most marketing teams will run. The discipline required to run a valid A/B test (one variable at a time, sufficient sample size, appropriate duration) is more demanding than most teams realise. The temptation to call a winner too early, or to test too many variables simultaneously, produces results that are statistically meaningless even when they feel decisive.
Market sizing and segmentation analysis uses publicly available data, census data, industry reports, and modelling to estimate the size of a market and the distribution of potential customers within it. This is foundational for product launch planning. Product launch strategy built on accurate market sizing is categorically different from one built on optimistic assumptions.
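A top-down sizing estimate of the kind described above is, at its core, simple arithmetic applied to sourced figures. A minimal sketch; every number below is a hypothetical placeholder to be replaced with data from census figures, industry reports, or your own research:

```python
# Top-down market sizing sketch. All figures are hypothetical
# placeholders, not data from the article.
total_businesses = 500_000        # e.g. from a government business register
pct_in_segment = 0.12             # share matching your target profile
pct_reachable = 0.40              # share your channels can realistically reach
expected_annual_spend = 3_000     # average contract value, your currency

tam = total_businesses * expected_annual_spend   # total addressable market
sam = tam * pct_in_segment                       # serviceable addressable market
som = sam * pct_reachable                        # serviceable obtainable market

print(f"TAM: {tam:,.0f}  SAM: {sam:,.0f}  SOM: {som:,.0f}")
```

The model is trivial; the discipline lies in sourcing each input honestly rather than working backwards from the answer the launch plan needs.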
Competitive Intelligence: The Research Type Most Teams Neglect
Competitive intelligence is systematic research into what your competitors are doing, why they are doing it, and what it means for your positioning. Most product teams treat it as a one-time exercise done before a launch, then never revisit it. That is a mistake.
Competitive research includes analysing competitor messaging and positioning, reviewing their pricing structures, monitoring their product releases, reading their customer reviews, tracking their advertising, and understanding their sales approach. Competitive analysis done well is not about copying what competitors do. It is about understanding where the market is going and where the gaps are.
I judged the Effie Awards for several years. One pattern I noticed in the submissions that did not win was that their market analysis was thin. They had researched their own customers reasonably well, but they had a shallow understanding of why customers chose competitors, or what the competitor’s actual value proposition was beyond surface-level messaging. Strong competitive intelligence would have sharpened both their positioning and their creative strategy.
A clear value proposition is almost impossible to write without a clear view of the competitive landscape. You cannot differentiate from a competitor you do not understand.
Secondary Research: What Existing Data Can and Cannot Tell You
Secondary research draws on data that already exists: industry reports, academic research, government statistics, trade publications, platform data, analyst reports, and aggregated consumer research. It is the starting point for almost every research project because it establishes what is already known.
The limitation of secondary research is that it was collected for someone else’s purpose. An industry report on consumer confidence tells you something about the general market, but it cannot tell you how your specific customer segment in your specific geography responds to your specific product category. The further your question is from the original research purpose, the less reliable the secondary data becomes.
Secondary research is also subject to publication bias and commercial interest. Vendor-sponsored research tends to reach conclusions favourable to the vendor. Industry association reports tend to paint the industry in a positive light. This does not make them useless, but it means you should read them critically and triangulate across multiple sources.
The most reliable secondary sources for market research are government statistical agencies, academic journals, and large-scale independent surveys. Platform data from Google, Meta, and similar sources is useful for understanding digital behaviour but reflects the platform’s own audience and should not be generalised beyond it.
Customer Research: The Methods Closest to Commercial Truth
Customer research is a subset of primary research focused specifically on understanding existing customers: why they bought, how they use the product, what they value, and what would make them leave. It is the most commercially grounded form of research available to a product team, and it is frequently underused because it requires access that other teams control.
Win/loss analysis is one of the most valuable customer research methods and one of the least common. It involves interviewing both customers who chose you and prospects who chose a competitor, to understand the actual decision-making process. Product marketing practitioners who have run rigorous win/loss programmes consistently report that the findings reshape their positioning, their sales enablement, and their product roadmap.
Net Promoter Score is a customer research tool that has been both overused and misunderstood. The score itself tells you very little. The follow-up question, asking why someone gave that score, is where the insight lives. NPS as a vanity metric is worthless. NPS as a structured listening programme, where the verbatim responses are coded and analysed systematically, is genuinely useful.
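The score itself really is simple arithmetic, which is part of why it gets treated as a vanity metric. A minimal sketch of the standard calculation, using hypothetical responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 scale. Passives (7-8) dilute but do not count."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of survey responses, not real data.
sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(sample))  # 4 promoters, 3 detractors out of 10 -> score of 10
```

The number is the easy part; the systematic coding of the "why" verbatims behind each score is where the actual research happens.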
Customer advisory boards are a qualitative research tool that gives product teams ongoing access to a representative sample of customers. They work best when the conversation is genuinely exploratory rather than a product demo dressed up as research. Customers can tell the difference, and they disengage when they feel they are being sold to rather than listened to.
Pricing Research: A Specialist Area Most Teams Get Wrong
Pricing research is a specific application of market research that attempts to understand what customers are willing to pay and how price sensitivity varies across segments. It is one of the most commercially consequential forms of research and one of the most technically demanding.
The fundamental problem with pricing research is that people are poor predictors of their own price sensitivity. Ask someone what they would pay for a product and they will give you a number that reflects social desirability and rational self-interest rather than actual willingness to pay. Conjoint analysis and Van Westendorp price sensitivity models are more reliable because they force trade-offs rather than asking for direct price estimates.
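The forced trade-off logic of a Van Westendorp analysis can be sketched in a few lines. This is a simplified illustration using only two of the model's four price curves, with hypothetical response data; a full Price Sensitivity Meter also asks "cheap" and "expensive" and reads the curve intersections:

```python
# Simplified Van Westendorp sketch. Respondents name the price at
# which the product feels suspiciously cheap and the price at which
# it becomes too expensive. All data below is hypothetical.
too_cheap     = [10, 12, 15, 15, 18, 20, 20, 22, 25, 25]
too_expensive = [40, 45, 45, 50, 55, 60, 60, 65, 70, 80]

def share_too_cheap(price, answers):
    """Share of respondents who would find this price suspiciously low."""
    return sum(1 for a in answers if price <= a) / len(answers)

def share_too_expensive(price, answers):
    """Share of respondents who would find this price too expensive."""
    return sum(1 for a in answers if price >= a) / len(answers)

# Scan candidate prices; treat the acceptable range as the prices
# where neither objection exceeds 25% of respondents.
acceptable = [p for p in range(10, 81)
              if share_too_cheap(p, too_cheap) < 0.25
              and share_too_expensive(p, too_expensive) < 0.25]
print(acceptable[0], acceptable[-1])
```

Because respondents are reacting to specific prices rather than naming a number in the abstract, the answers are less contaminated by social desirability, which is the point the paragraph above makes.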
Behavioural pricing research, which observes actual purchase decisions at different price points, is more reliable than stated preference research. If you can test pricing in market, even at small scale, the data you get is categorically more useful than any survey. Pricing strategy built on revealed preference rather than stated preference produces better commercial outcomes.
How to Choose the Right Research Type for Your Question
The selection framework is simpler than most research textbooks suggest. Start with the question, not the method.
If your question is exploratory (“What do customers think about this problem?”), use qualitative methods. Interviews and focus groups are appropriate. Sample sizes of 10 to 20 are sufficient to reach thematic saturation in most B2C contexts. In B2B, where the population is smaller and the purchase decision is more complex, even five to eight in-depth interviews can be highly informative.
If your question is descriptive (“How many customers experience this problem?”), use quantitative methods. Surveys, behavioural data, and market sizing are appropriate. Sample size matters here, and you need to think carefully about whether your sample is representative of the population you care about.
If your question is causal (“Does this change in the product increase conversion?”), use experimental methods. A/B testing, multivariate testing, and controlled pilots are appropriate. Observational data cannot answer causal questions, no matter how large the dataset.
If your question is competitive (“What is our competitor’s positioning and where are the gaps?”), use competitive intelligence methods. Desk research, mystery shopping, review analysis, and job posting analysis are appropriate. This is secondary research, but it requires active synthesis rather than passive reading.
The mistake I see most often is teams defaulting to surveys for every question type. Surveys are the path of least resistance: they are easy to commission, they produce numbers, and they give stakeholders something that looks like data. But they are only the right tool for a narrow range of questions. The discipline of matching method to question is what separates research that changes decisions from research that fills slide decks.
The Relationship Between Research and Sales Enablement
Market research does not just inform product and marketing decisions. It directly improves the quality of sales conversations. When your sales team understands the decision-making process your buyers go through, the objections they raise, and the language they use to describe the problem, they close more deals. Sales enablement built on customer research is qualitatively different from sales enablement built on internal assumptions about what buyers care about.
The connection between research and sales performance is one of the clearest examples of research producing a measurable commercial outcome. It is also one of the least discussed, because market research is typically owned by marketing and sales enablement is typically owned by sales, and the two functions do not always share information as efficiently as they should.
Product marketing sits at the intersection of these two functions. If you are thinking about how research connects to positioning, messaging, and go-to-market execution, the product marketing section of The Marketing Juice covers those connections in detail, including how to structure research programmes that feed both product development and commercial teams.
What Good Market Research Actually Looks Like in Practice
Good market research is not defined by its methodology. It is defined by whether it changes decisions. If a research programme produces a report that everyone reads, agrees with, and then ignores, it has failed regardless of how technically rigorous it was.
The research programmes I have seen produce the most commercial impact share a few characteristics. They start with a specific, answerable question. They use the method appropriate to that question. They are designed from the outset with a decision in mind, meaning the team has agreed in advance what they will do differently depending on what the research finds. And they are communicated in a way that makes the insight actionable rather than merely interesting.
The last point is more important than it sounds. I have read research reports that were intellectually impressive and commercially inert, because they presented findings without implications. The job of market research is not to describe the world. It is to give the people making decisions a clearer view of the world so they can make better ones. That requires the researcher to go one step further than the data and say: given this, here is what it means for the decision you are facing.
When I taught myself to code in my first marketing role because the MD would not give me budget for a new website, I was doing a form of applied research: I was testing what was possible within real constraints, learning by doing, and producing something that worked rather than something that looked good in a proposal. That mindset, of being oriented toward decisions and outcomes rather than process and methodology, is what separates useful market research from expensive wallpaper. Product launch planning that is grounded in this kind of research produces better results than planning built on assumptions, regardless of how sophisticated the launch strategy looks on paper.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
