Market Research Best Practices That Change Decisions

Market research best practices are the methods and disciplines that turn raw data into decisions you can defend. Done well, research reduces the gap between what you assume about your customers and what is true. Done poorly, it produces slide decks that confirm what the team already believed and gathers dust before the campaign launches.

The difference between the two is rarely budget or methodology. It is almost always rigour: how questions are framed, who is asked, what assumptions are tested, and whether anyone in the room has the courage to act on what they find.

Key Takeaways

  • Research that confirms existing assumptions is not research; it is expensive validation theatre. Build in mechanisms to surface what you do not want to hear.
  • The research question you start with is rarely the right one. Spend time defining the actual business decision the research needs to inform before designing a single survey.
  • Primary and secondary research serve different purposes. Mixing them without understanding which one you need leads to confused outputs and wasted spend.
  • Sample design is where most research goes wrong. A well-framed question asked to the wrong audience produces confident, misleading answers.
  • Research has no value until it changes something: a strategy, a brief, a budget allocation, or a product decision. If nothing changes after the findings are presented, the research failed.

Why Most Market Research Produces Noise Instead of Signal

Early in my agency career, a client commissioned a substantial piece of customer research before a brand relaunch. The findings came back broadly positive. Customers liked the brand, trusted the product, and said they would recommend it. The relaunch went ahead on that basis. Within six months, the campaign had underperformed against every commercial metric. When we went back and looked at the research, the problem was obvious: every question had been framed around brand perception, not purchase behaviour. Nobody had asked why customers were buying less often, or what was pulling them toward competitors. The research answered the questions it was asked, and nobody had asked the right questions.

This is the most common failure mode in market research. Not fraud, not incompetence, just a mismatch between the business problem and the research design. The team knew what they wanted to hear, so they built an instrument that would hear it.

Good research starts with a different discipline: defining the decision, not the topic. Before any methodology is chosen, the team should be able to answer one question clearly: what will we do differently depending on what this research finds? If the answer is “nothing, we just want to understand the market better,” that is not a research brief. That is curiosity, and curiosity is expensive.

For a deeper look at how research fits into a broader strategic intelligence process, the Market Research and Competitive Intel hub covers the full range of methods, frameworks, and practical approaches worth building into your planning cycle.

How to Frame a Research Question That Is Actually Useful

The framing of a research question determines everything downstream. A poorly framed question produces data that looks authoritative but cannot support a decision. A well-framed question gives you something to act on.

The most reliable test is to force the question into a binary. If your research question is “how do customers perceive our brand?” you cannot fail to get an answer, but you also cannot make a decision from it. If you reframe it as “is our brand’s premium positioning credible to customers who have not yet purchased from us?” you now have a question with a yes or a no, and a decision attached to each outcome.

This sounds obvious. In practice, I have sat in dozens of research debriefs across clients in retail, financial services, and B2B technology, and the question-framing problem shows up every time. The brief is written by a marketing team, reviewed by an agency, and approved by a director, and nobody stops to ask: what are we going to do with this? The research gets commissioned, the fieldwork runs, the findings get presented, and then the deck sits in a shared drive while the team makes decisions on instinct anyway.

Three disciplines help here. First, write the decision before you write the brief. State explicitly: if the evidence suggests X, we will do Y. If it shows A, we will do B. Second, involve the people who will act on the findings in the design process. If the head of product has no input into a piece of customer research, do not be surprised when the findings feel irrelevant to the product roadmap. Third, build in a pre-mortem. Before fieldwork starts, ask the team: what finding would force us to change our plans? If the answer is nothing, the research design needs to go back to the drawing board.
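
As a concrete sketch of that first discipline, the decision can be written down before the brief exists. Everything named below, the question, the 60% threshold, and the actions, is hypothetical, chosen only to illustrate the shape of a pre-committed decision:

```python
# A hypothetical pre-committed decision map, written before the research
# brief. Every name and threshold here is illustrative, not a template.
decision_map = {
    "decision": "Do we relaunch on the premium positioning?",
    "question": "Is our premium positioning credible to non-customers?",
    "outcomes": {
        # pre-agreed finding -> pre-agreed action
        "credible to >= 60% of surveyed non-customers": "proceed with the premium relaunch",
        "credible to < 60% of surveyed non-customers": "reposition on value before relaunching",
    },
    "pre_mortem": "A sub-60% credibility score would force us to change the plan.",
}

for finding, action in decision_map["outcomes"].items():
    print(f"If {finding}: {action}")
```

If the outcomes mapping cannot be filled in, with a different action attached to each possible finding, the brief is describing curiosity, not a decision.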

Primary vs Secondary Research: Choosing the Right Tool

One of the most consistent mistakes I see, particularly in smaller marketing teams and early-stage businesses, is treating secondary research as a substitute for primary research. It is not. They answer different questions.

Secondary research, meaning published reports, industry data, competitor filings, and third-party analysis, tells you about the market. It tells you how big it is, how it is segmented, what the macro trends are, and how competitors are positioning themselves. Sources like Forrester and BCG produce category-level analysis that is genuinely useful for sizing opportunities and stress-testing assumptions about market direction. But it cannot tell you why your specific customers behave the way they do, what they actually value about your product, or what would make them switch.

Primary research, meaning surveys, interviews, focus groups, ethnographic observation, and behavioural data from your own platforms, tells you about your customers. It is more expensive, slower, and harder to design well, but it is the only source of insight that is specific to your business and your audience.

The practical discipline is to use secondary research to frame the hypotheses you then test with primary research. Read the market reports to understand the landscape. Form a view on where your customers might sit within it. Then design primary research to test whether that view holds. Most teams do one or the other. The ones that do both, in the right sequence, tend to produce research that actually changes something.

There is also a cost discipline here. When I was running agency operations, I would often push teams to exhaust secondary sources before commissioning primary fieldwork. Not because primary research is not valuable, it is, but because secondary research is almost always faster and cheaper, and it frequently answers the question you thought you needed fieldwork for.

Sample Design: Where Confidence Becomes Misleading

If the research question is where most briefs go wrong, sample design is where most fieldwork goes wrong. A well-constructed survey administered to the wrong people produces findings that look credible and are not.

The most common version of this problem is surveying existing customers when the business question is about acquisition. If you want to understand why people are not buying your product, asking the people who already buy it is, at best, only half the answer. Your existing customers have already self-selected. They have already overcome whatever barriers exist. Their answers will systematically underrepresent the friction that is stopping non-customers from converting.

I have seen this play out in category after category. A financial services client ran a customer satisfaction survey, found strong scores, and concluded that the product was not the problem. The problem was churn. But they had surveyed active customers, not lapsed ones. The people who had already left were not in the sample. The research was technically accurate and strategically useless.

Good sample design starts by defining the population you actually care about, not the one that is easiest to reach. Then it asks whether that population can be reached through the method you have chosen. Online panels are fast and cheap, but they skew toward certain demographics and engagement patterns. In-depth interviews are rich and flexible, but they are slow and expensive to scale. There is no universally correct method, only methods that are or are not appropriate for the question and the population.

Size matters less than most people think. A sample of 200 respondents who precisely match your target audience will outperform a sample of 2,000 who do not. Statistical significance is a useful concept, but it is frequently used to mask a more fundamental problem: that the right people were never in the sample to begin with.
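
The arithmetic behind that claim is worth making explicit. A minimal sketch, assuming a simple random sample and the worst-case proportion of 50%, shows how little precision the extra 1,800 respondents actually buy:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=200:  +/-{margin_of_error(200):.1%}")   # roughly +/-6.9%
print(f"n=2000: +/-{margin_of_error(2000):.1%}")  # roughly +/-2.2%
```

Tenfold more respondents narrows the interval by less than five percentage points, and that interval only describes sampling noise. If the 2,000 are drawn from the wrong population, the bias can dwarf either figure, and no sample size reduces it.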

Qualitative Research Is Not a Soft Option

There is a persistent bias in commercially oriented marketing teams toward quantitative research. Numbers feel more defensible in a boardroom. Percentages and charts travel well through an organisation. Qualitative findings, by contrast, feel anecdotal, subjective, and hard to act on.

This bias is understandable and mostly wrong. Qualitative research, done well, surfaces the things that quantitative research cannot measure: the language customers use to describe a problem, the emotional logic behind a purchase decision, the friction that never shows up in a conversion funnel because it happens before the customer even arrives at your site.

When I was growing an agency from around 20 people to over 100, one of the most commercially useful things we did was run a series of in-depth interviews with clients who had left us. Not surveys, not NPS scores, actual conversations. What came back was not what the satisfaction data had suggested. The scores had been fine. The real issues were about communication cadence, the feeling of being handed off between account managers, and a perception that the senior team disappeared after the pitch. None of that would have surfaced in a structured survey. It changed how we ran client relationships for years afterward.

The discipline in qualitative research is to resist the urge to over-interpret. Six interviews do not constitute a finding. They constitute a hypothesis worth testing at scale. Use qualitative work to generate the questions, then use quantitative work to measure the answers. The two methods are complements, not competitors.

How to Avoid Confirmation Bias in Research Design

Confirmation bias is the single biggest threat to the integrity of market research, and it operates at every stage of the process. It shapes which questions get asked, how they are worded, which findings get emphasised in the debrief, and which ones get quietly buried.

The structural problem is that the people who commission research usually have a view they want confirmed. A marketing director who has already committed to a brand repositioning does not want research that says the current positioning is working. A product team that has spent six months building a new feature does not want to hear that customers do not value it. The research gets framed, consciously or not, to produce the answer the sponsor needs.

There are practical ways to reduce this. The most effective is to separate the people who design the research from the people who have a stake in the outcome. If that is not possible, at minimum have someone outside the project review the questionnaire or discussion guide before fieldwork starts, specifically looking for leading questions, false dichotomies, and loaded framing.

Another useful discipline is to require the team to pre-specify what a negative finding would look like before the fieldwork runs. If you cannot describe what a bad result would look like, you have not designed a real test. You have designed a ritual.

The problem-agitate-solve framework is a useful lens here, not for writing copy, but for thinking about research design. Start with the problem you are trying to solve. Be honest about the pain it causes when you get it wrong. Then design the research to solve it, not to avoid it.

Turning Research Findings Into Decisions

The most underrated skill in market research is not methodology. It is synthesis. The ability to look at a body of data, qualitative and quantitative, primary and secondary, and extract the two or three things that actually matter for the decision at hand.

Most research reports do the opposite. They present everything. Every cross-tab, every verbatim, every sub-group finding. The people who receive the report are left to do the synthesis themselves, and they almost never do. They read the executive summary, note the headline numbers, and file the rest.

Good research outputs are built around decisions, not data. The structure should be: here is what we found, here is what it means for the decision we were trying to make, and here is what we recommend. That is it. If the research was designed to answer a specific question, the output should answer that question directly, with the supporting evidence available for anyone who wants to interrogate it.

One of the disciplines I brought into agency planning processes was requiring every research debrief to end with a “so what” slide. Not a summary of findings, a statement of implications. What should we do differently as a result of this research? If the team could not answer that question, the research had not done its job, regardless of how rigorous the methodology was.

This connects to a broader point about how research fits into a planning process. Research is not a standalone activity. It is an input into a decision. The value of that input is determined entirely by whether it changes the decision. If the strategy, the brief, or the budget allocation looks exactly the same after the research as it did before, something has gone wrong, either in the design, the synthesis, or the willingness of the team to act on what they found.

Understanding how to commission, interpret, and act on research is one of the more durable strategic skills in marketing. The Market Research and Competitive Intel hub brings together the frameworks and thinking that make that skill practical rather than theoretical.

Behavioural Data Is Not a Substitute for Understanding

One of the more seductive arguments in modern marketing is that behavioural data has made traditional research obsolete. Why ask customers what they think when you can watch what they do? Clickstream data, purchase history, A/B test results, heatmaps: all of it tells you what happened. None of it tells you why.

This matters because the “why” is where the strategic insight lives. You can see from your analytics that customers are dropping off at a particular point in the checkout flow. The data tells you the problem exists. It does not tell you whether the problem is a UX issue, a pricing concern, a trust deficit, or a distraction caused by a competing offer they just received. Each of those diagnoses implies a different solution. Getting the diagnosis wrong means solving the wrong problem with confidence.
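
A minimal sketch makes the limitation concrete. Using pandas and entirely hypothetical event names, behavioural data can locate the leak precisely, and still say nothing about its cause:

```python
import pandas as pd

# Hypothetical event log: one row per funnel step a user reached.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "step": ["view", "cart", "checkout",
             "view", "cart",
             "view", "cart", "checkout", "purchase"],
})

funnel_order = ["view", "cart", "checkout", "purchase"]
reached = (events.groupby("step")["user_id"]
                 .nunique()
                 .reindex(funnel_order, fill_value=0))
step_conversion = reached / reached.shift(1)  # conversion from the prior step

print(step_conversion.round(2))
# This locates the "where" (checkout -> purchase is the weakest step in
# this toy data). Nothing in the log distinguishes a UX issue from a
# pricing concern or a trust deficit: the "why" needs primary research.
```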

I spent years managing significant volumes of paid media spend across multiple industries, and the teams that performed best were consistently the ones that combined behavioural data with genuine customer understanding. They used the data to identify where problems existed and primary research to understand why. The teams that relied exclusively on behavioural data tended to optimise their way into local maxima, making things incrementally better without ever questioning whether they were optimising the right thing.

Behavioural data is a powerful input. It is not a replacement for talking to customers. The organisations that understand this distinction tend to produce better research, better briefs, and better marketing.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the most important step in market research?

Defining the decision the research needs to inform is more important than any methodological choice. Before commissioning fieldwork, the team should be able to state clearly what they will do differently depending on what the research finds. Without that discipline, research tends to produce data that describes the market without changing how the business operates within it.

When should you use qualitative versus quantitative research?

Qualitative research is best used to generate hypotheses: to understand the language customers use, the emotional logic behind decisions, and the friction points that do not show up in structured data. Quantitative research is best used to test those hypotheses at scale and produce findings that can be generalised across a population. The two methods work best in sequence, with qualitative work informing the design of quantitative instruments.

How do you avoid confirmation bias in market research?

The most practical approach is to separate research design from research sponsorship. Have someone without a stake in the outcome review the questionnaire or discussion guide before fieldwork starts. Require the team to pre-specify what a negative finding would look like. If the research cannot produce a result that would change the plan, it has not been designed as a genuine test.

What is the difference between primary and secondary market research?

Secondary research uses existing published sources, such as industry reports, competitor analysis, and third-party data, to understand the market at a category level. Primary research involves collecting new data directly from your target audience through surveys, interviews, or observation. Secondary research is faster and cheaper and is best used to frame hypotheses. Primary research is more expensive but produces insights specific to your business and your customers.

How do you know if market research has been successful?

Research is successful when it changes a decision. If the strategy, brief, or budget allocation looks the same after the research as it did before, the research either failed to surface genuinely new information, or the team lacked the willingness to act on what it found. Success is not measured by the quality of the methodology or the size of the sample. It is measured by whether the findings influenced what the business actually did next.
