Market Research Best Practices That Change Decisions
Market research best practices are the discipline of collecting, interpreting, and applying customer and competitor intelligence in ways that reduce commercial risk and improve decision quality. Done well, research changes what you build, how you price it, and who you talk to. Done poorly, it confirms what you already believed and costs you time you did not have.
Most marketing teams do not have a research problem. They have an interpretation problem. The data exists. The insight does not.
Key Takeaways
- Research that does not change a decision is not research; it is validation. Build your process around decision-forcing questions, not open-ended curiosity.
- Primary and secondary research serve different purposes. Using one as a substitute for the other is how brands end up with expensive decks full of market-level data and no customer-level truth.
- The most common research failure is asking the wrong question with the right methodology. Fix the brief before you commission the work.
- Competitive intelligence is not a slide in a pitch deck. It is an ongoing operational input that should inform pricing, positioning, and product decisions in real time.
- Speed matters more than perfection. A directionally correct insight acted on quickly beats a comprehensive report that arrives after the decision was already made.
In This Article
- Why Most Market Research Fails Before the Fieldwork Starts
- What Is the Difference Between Primary and Secondary Research?
- How Do You Design a Survey That Produces Useful Data?
- What Makes Qualitative Research Worth the Investment?
- How Should You Approach Competitive Intelligence as an Ongoing Practice?
- What Are the Most Common Mistakes in Market Research Interpretation?
- How Do You Turn Research Into Decisions Rather Than Documents?
- What Does Good Research Infrastructure Look Like for a Mid-Sized Marketing Team?
Why Most Market Research Fails Before the Fieldwork Starts
I have sat in enough research debriefs to know that the problems almost never start in the data. They start in the brief. Someone decides they want to “understand the customer better” or “explore brand perception” without specifying what decision the research needs to support. That ambiguity costs money and produces reports that get filed, not used.
The single most important question in any research brief is: what will we do differently depending on what we find? If the answer is “nothing, we just want to understand the landscape,” you are not doing research. You are doing comfort-seeking. That is a legitimate activity sometimes, but it should not be dressed up as strategic intelligence.
When I was running an agency and we were pitching for a significant retained client, I pushed the team to do proper pre-pitch research on the prospect’s competitive position rather than relying on the standard “here is your industry” slide. We found a pricing gap that the client had not noticed. We built the pitch around it. We won the business. The research took two days. The insight was worth considerably more than that. The point is not that we were clever. The point is that we asked a specific commercial question and went looking for the answer, rather than assembling generic sector data and calling it insight.
If you want a broader grounding in how research fits into the full strategic picture, the Market Research and Competitive Intelligence hub covers the landscape from foundational methods through to applied competitive analysis.
What Is the Difference Between Primary and Secondary Research?
Primary research is data you collect directly: surveys, interviews, focus groups, usability testing, ethnographic observation. Secondary research is data someone else collected: industry reports, published studies, competitor filings, search trend data, social listening outputs.
Both have a place. Neither is a substitute for the other. The mistake I see most often is over-reliance on secondary research because it is cheaper and faster, combined with an assumption that market-level data tells you something useful about your specific customer. It rarely does. Market-level data tells you the size of the pond. Primary research tells you what the fish actually want.
Secondary research is where you should start, not finish. Use it to understand the territory, identify the gaps, and sharpen the questions you will take into primary fieldwork. If you go straight to a survey without doing that groundwork first, you end up asking questions that are either too broad or based on assumptions that do not hold.
There is also a practical point about credibility. When I was building the business case for a significant budget reallocation at a previous agency, I needed to demonstrate to the board that a particular channel was underperforming relative to its strategic value. Secondary data from industry sources gave me the market context. Primary data from our own client base gave me the specific evidence. The combination was far more persuasive than either would have been alone. BCG’s work on marketing investment decisions makes a similar point about the relationship between evidence quality and organisational confidence in resource allocation.
How Do You Design a Survey That Produces Useful Data?
Most surveys are too long, ask leading questions, and measure things that do not map to any commercial decision. Here is how to avoid the most common failures.
Start with the output, not the input. Before you write a single question, write the table or chart you expect to produce from the data. If you cannot describe what the output looks like, you are not ready to write questions. This sounds obvious. Almost nobody does it.
Keep surveys under ten minutes. Completion rates drop sharply beyond that point, and the quality of responses in the final third of a long survey is noticeably lower than in the first third. If you have more questions than a ten-minute survey can accommodate, you have too many questions. Prioritise ruthlessly.
Avoid double-barrelled questions. “How satisfied are you with the quality and price of our service?” is two questions. Split them. Avoid leading questions. “How much do you agree that our product is better than competitors?” is not a neutral question. Rewrite it. Avoid jargon. Your customers do not speak your internal language. Write questions the way a customer would speak, not the way your brand guidelines would.
Include at least one open-ended question. Quantitative data tells you what is happening. Open-ended responses tell you why. The verbatim comments from a well-constructed open question are often the most valuable thing in the entire survey. Do not sacrifice them for the sake of a cleaner data set.
What Makes Qualitative Research Worth the Investment?
Qualitative research is frequently dismissed by analytically minded marketing teams because its sample sizes are too small to be statistically significant. That misses the point. Qualitative research is not trying to tell you how many people think something. It is trying to tell you why they think it, and what language they use to describe it.
The language point is underrated. When I was working with a financial services client on a brand repositioning, the qualitative interviews revealed that customers consistently described their relationship with the brand using the word “complicated.” Not “complex,” not “difficult,” not “confusing.” Complicated. That word showed up in nine out of twelve interviews. We built the repositioning brief around simplicity as a direct response to that specific word. The creative work landed because it was speaking in the customer’s own language, not ours.
Good qualitative research requires good moderation. A moderator who leads witnesses, or who does not probe beyond the first answer, will produce data that reflects their assumptions rather than the participant’s reality. If you are doing this in-house, invest in moderator training before you invest in more participants.
Five to eight interviews on a well-defined question will often surface the dominant themes. You do not need fifty. You need the right people and the right questions. Diminishing returns in qualitative research set in earlier than most people expect.
How Should You Approach Competitive Intelligence as an Ongoing Practice?
Competitive intelligence is not a quarterly slide deck. It is an operational input that should be running continuously in the background of any serious marketing function. The brands that get surprised by competitor moves are almost always the ones that treat competitive research as a project rather than a process.
The basics are not complicated. Set up monitoring for competitor brand mentions, pricing changes, job postings (which signal strategic intent), product announcements, and content output. Tools like SEMrush make it straightforward to track how audiences are engaging with competitor properties across channels. The data is available. The discipline is in reviewing it regularly and asking what it means for your own positioning.
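If you want to make this concrete without buying anything, a lightweight in-house monitor can be as simple as a script that checks the same competitor pages on a schedule and flags when they change. The sketch below is illustrative only: the URLs and state file are placeholders, and it is not a stand-in for how SEMrush or any other platform works.

```python
# Minimal sketch of a competitor page-change monitor.
# URLs, the state file, and the alerting step are hypothetical placeholders;
# swap in your own competitor pages and notification channel.
import hashlib
import json
import urllib.request
from pathlib import Path

WATCHLIST = {
    "competitor_a_pricing": "https://example.com/pricing",   # placeholder URL
    "competitor_b_careers": "https://example.com/careers",   # placeholder URL
}
STATE_FILE = Path("monitor_state.json")

def fetch_hash(url: str) -> str:
    """Download a page and return a fingerprint of its contents."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def run_check() -> list[str]:
    """Compare each watched page against its last known fingerprint."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    changed = []
    for name, url in WATCHLIST.items():
        current = fetch_hash(url)
        if state.get(name) and state[name] != current:
            changed.append(name)  # page content has changed since the last run
        state[name] = current
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return changed

if __name__ == "__main__":
    for name in run_check():
        print(f"Review needed: {name} changed since the last check")
```

The value is not in the script. It is in the habit of running the same checks on the same signals every week and asking what any change means for your positioning.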
Job postings are particularly underused as a competitive intelligence source. If a competitor suddenly starts hiring performance marketing managers in three new markets, that tells you something about their growth strategy. If they are hiring data scientists at scale, that tells you something about where their product is heading. This is not espionage. It is reading what is publicly available with commercial attention.
The mistake I see most often in competitive intelligence is conflating activity with intent. A competitor running more paid search ads does not necessarily mean they are growing. It might mean they are defending. A competitor going quiet on social does not mean they are struggling. It might mean they have shifted budget to channels you cannot see as easily. Always ask what the data could mean, not just what it appears to say.
There is a useful historical lesson here. Microsoft’s failure to engage with Google’s rise is a case study in what happens when competitive intelligence is filtered through internal assumptions rather than external evidence. The signals were there. The interpretation was wrong because it was shaped by what Microsoft wanted to be true.
What Are the Most Common Mistakes in Market Research Interpretation?
Collecting data is the easy part. Interpreting it honestly is where most teams struggle. Here are the failure modes I have seen most consistently across twenty years of agency and client-side work.
Confirmation bias is the most pervasive. Teams go into research with a conclusion already in mind and find ways to read the data as supporting it. The antidote is to actively look for evidence that contradicts your hypothesis before you look for evidence that supports it. If you cannot find any, you are probably not looking hard enough.
Averaging out important variation is the second most common problem. A net promoter score of 45 looks healthy until you segment it and discover that your most commercially valuable customer cohort is scoring you at 20 while your least profitable cohort is scoring you at 70. The aggregate hides the story. Always cut your data by the segments that matter commercially, not just the segments that are easy to report.
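If it helps to see the arithmetic, here is a small illustrative sketch. The response data is invented purely to reproduce the 45, 20, and 70 figures above, not taken from any real survey.

```python
# Illustrative sketch only: toy response data constructed to mirror the
# example above (aggregate NPS of 45 hiding a 20 vs 70 split by segment).
from collections import defaultdict

# (commercial segment, 0-10 likelihood-to-recommend score)
responses = [
    ("high_value", s) for s in [9, 10, 9, 9, 5, 3, 7, 8, 7, 8]
] + [
    ("low_value", s) for s in [9, 10, 9, 9, 10, 9, 10, 9, 4, 8]
]

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

print("Aggregate NPS:", nps([s for _, s in responses]))   # 45
for segment, scores in by_segment.items():
    print(f"{segment} NPS:", nps(scores))                 # 20 and 70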
Treating claimed behaviour as actual behaviour is a consistent trap in survey research. People do not always do what they say they do. They are not lying, they are approximating. They are telling you what they think they do, or what they think they should do. If you need to know actual behaviour, observe it or measure it directly rather than asking about it. Behavioural data from your own platforms will almost always be more accurate than self-reported survey data on the same question.
Over-indexing on recency is another one. A spike in search interest for a particular topic in the last thirty days does not necessarily indicate a durable trend. It might be seasonal, it might be driven by a single news event, it might be an anomaly. Context matters. A single data point is not a trend. Three data points in the same direction over a meaningful time period starts to look like one.
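If it helps to see that heuristic written down, here is a rough sketch that only flags a series as a possible trend when the last three periods move in the same direction. It is a rule of thumb, not a statistical test, and the figures are invented.

```python
# Rough heuristic sketch, not a statistical test: flag a possible trend
# only when the three most recent period-on-period changes share a direction.
# The interest figures below are invented for illustration.

def looks_like_trend(series: list[float], periods: int = 3) -> bool:
    """True if the last `periods` consecutive changes all share the same sign."""
    if len(series) < periods + 1:
        return False
    changes = [b - a for a, b in zip(series[-periods - 1:-1], series[-periods:])]
    return all(c > 0 for c in changes) or all(c < 0 for c in changes)

search_interest = [120, 118, 125, 190, 124, 122]   # one-off spike, not a trend
print(looks_like_trend(search_interest))            # False

search_interest = [120, 126, 133, 141]              # three rises in a row
print(looks_like_trend(search_interest))            # True
```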
How Do You Turn Research Into Decisions Rather Than Documents?
Every marketing function has a graveyard of research reports that were thorough, well-presented, and completely ignored. The problem is usually structural. Research gets commissioned, delivered, and filed because there is no mechanism connecting the findings to specific decisions that need to be made.
The fix is to connect every research project to a named decision owner and a decision deadline before the fieldwork starts. Someone needs to be accountable for acting on the findings. If nobody is accountable, nobody acts. This sounds like process for the sake of process, but it is actually the difference between research that earns its budget and research that does not.
Presentation format matters more than most people admit. A 60-slide research deck will not change decisions. A two-page summary with three specific recommendations and the evidence for each one might. I made a rule in my agency that any research debrief had to include a slide called “What we should do differently.” Not findings. Not implications. Specifically, what we should do differently. That framing forced the team to translate insight into action rather than stopping at observation.
Speed is also a genuine virtue in research. A directionally correct answer that arrives in time to inform a decision is worth more than a comprehensive answer that arrives after the decision has already been made. This does not mean cutting corners on methodology. It means being ruthless about scope. Do the research that answers the decision-forcing question. Do not do the research that would be interesting to know.
There is also a point worth making about the relationship between research and organisational confidence. Teams that do research well tend to make faster decisions, not slower ones, because they have a shared evidential basis for the call. Teams that do research poorly tend to make slower decisions because every discussion becomes an argument about whose assumption is correct. Good research is not a brake on speed. It is the thing that makes speed possible.
The best thinking on research process, competitive intelligence, and strategic planning is collected in the Market Research and Competitive Intelligence section of The Marketing Juice. If you are building or rebuilding a research function, it is worth working through the full set of resources there.
What Does Good Research Infrastructure Look Like for a Mid-Sized Marketing Team?
You do not need a dedicated research team to do this well. You need a clear process, a small set of reliable tools, and a habit of asking the right questions before you commission work.
For most mid-sized marketing teams, the research stack needs to cover four areas: search intelligence, competitive monitoring, customer voice, and market sizing. Search intelligence tools give you demand signals and content gap data. Competitive monitoring gives you visibility on what competitors are doing across channels. Customer voice comes from your own survey and interview programmes. Market sizing comes from secondary sources, industry reports, and analyst coverage.
You do not need to spend heavily on all four simultaneously. Start with search intelligence because it is the most immediately actionable and the most cost-effective to maintain. Add competitive monitoring next, because the compounding value of an ongoing intelligence feed outweighs the upfront cost relatively quickly. Build your customer voice programme around the specific decisions you need to make in the next twelve months, not around a general desire to understand customers better.
The tools matter less than the process. I have seen teams with access to every major research platform produce almost no usable insight, and I have seen teams with a spreadsheet and a well-designed survey programme produce work that genuinely changed strategic direction. The discipline is in the questions you ask and the rigour you bring to interpretation, not in the sophistication of the platform.
When I was scaling an agency from a small team to something significantly larger, the research function was one of the last things we formalised, which in retrospect was a mistake. We were making positioning and pricing decisions on the basis of what felt right rather than what the evidence suggested. Some of those calls were fine. Some were not. The ones that were not would have been caught earlier with a more systematic approach to gathering and interpreting market signals.
The best thing I did when we eventually built the function properly was to separate the people responsible for collecting data from the people responsible for interpreting it. Not because the skills are incompatible, but because the same person tends to interpret data in ways that are consistent with how they collected it. A second pair of eyes on the interpretation step catches things that the original analyst misses or rationalises away.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
