Market Research Strategies That Shape Decisions

Market research strategies are the frameworks and methods marketers use to gather, analyse, and act on information about customers, competitors, and markets. Done well, they reduce commercial risk and sharpen the decisions that matter most. Done poorly, they produce slide decks that confirm what everyone already believed and gather dust on a shared drive.

The gap between those two outcomes is not about budget or tools. It is about how you frame the question before you start, and whether the people commissioning the research are genuinely prepared to act on what they find.

Key Takeaways

  • The research question you ask determines the quality of insight you get. Vague briefs produce vague findings.
  • Primary and secondary research serve different purposes. Mixing them without a plan creates noise, not clarity.
  • Most organisations over-invest in data collection and under-invest in synthesis. The insight is in the interpretation, not the spreadsheet.
  • Research that cannot be connected to a commercial decision is a cost, not an investment. Always define the decision it is meant to inform before you start.
  • Behavioural data tells you what people do. Attitudinal data tells you what they say. Neither is the full picture on its own.

Why Most Market Research Fails Before It Starts

I have sat in a lot of research debriefs over the years. The ones that land well share a common trait: the team commissioning the work knew exactly what decision they were trying to make before a single survey was sent or a single focus group was recruited. The ones that land badly almost always started with a vague mandate, something like “we want to understand our customers better” or “let’s get a sense of the market.”

That kind of brief produces research that is technically competent and commercially useless. You end up with a lot of demographic profiling, some satisfaction scores, a few verbatim quotes from customers who liked the product, and no clear steer on what to do differently. It is expensive, time-consuming, and it gives everyone in the room the comfortable feeling of having done something rigorous without actually changing anything.

The first discipline of good market research is defining the decision. Not “what do we want to know” but “what will we do differently depending on what we find?” If the answer to that question is “nothing, we just want the data,” then the research probably should not be commissioned at all.

Primary vs Secondary Research: Knowing When to Use Each

The distinction between primary and secondary research is well understood in theory and frequently ignored in practice. Primary research is data you collect yourself: surveys, interviews, usability tests, ethnographic observation. Secondary research is data that already exists: industry reports, competitor filings, search trend data, published consumer surveys.

The mistake I see most often is organisations defaulting to primary research when secondary would have answered the question faster and cheaper. If you want to understand the size of a market or the broad shape of consumer behaviour in a category, there is usually enough published data to get you oriented before you spend money on fieldwork. Secondary research should be your starting point, not an afterthought.

Primary research earns its cost when you need specificity that no published source can provide. What do your customers think about your specific product, your specific pricing, your specific messaging? What barriers exist at the point of purchase for your particular audience? That is where qualitative interviews, surveys with your own customer base, and behavioural testing tools start to pay their way.

The market research hub on The Marketing Juice covers the full landscape of tools and methods available to marketing teams, from competitive intelligence platforms to behavioural analytics. It is worth reading alongside this article if you are building out a research capability from scratch.

Qualitative Research: The Method That Gets Underused and Misread

Qualitative research has an image problem in commercially minded organisations. Because it does not produce numbers, it gets treated as soft. Because it involves small samples, it gets dismissed as anecdotal. This is a mistake.

Qualitative methods (depth interviews, focus groups, ethnographic research, diary studies) are the only way to understand the “why” behind behaviour. Quantitative data tells you that 40% of your customers abandon the checkout at a particular step. It cannot tell you why. For that, you need to talk to people.

The misread happens when organisations try to use qualitative research to do something it was never designed for: make statistically representative claims. I have seen focus group findings presented as if they were population-level truths. Eight people in a room in Manchester do not represent the views of your entire customer base. What they can do is surface hypotheses worth testing at scale, illuminate the language customers use to describe a problem, and reveal friction points that no amount of click data would have found.

Used correctly, qualitative research is generative. It gives you better questions to ask in your quantitative phase. It gives your creative teams the raw material to write briefs that actually resonate. It tells you what to look for in the behavioural data you are already collecting.

Tools like Hotjar Highlights sit in an interesting middle ground here. They allow you to clip and tag session recordings, surfacing moments of friction or confusion in a way that is more scalable than traditional usability testing but still qualitative in nature. They are not a replacement for talking to customers, but they are a useful complement when you are trying to move quickly.

Quantitative Research: Where Rigour Either Earns Its Keep or Wastes Everyone’s Time

Surveys are the most commonly used quantitative research tool in marketing, and also the most commonly misused. The problems are usually structural: leading questions, poorly defined response scales, samples that do not reflect the target audience, and surveys so long that only the most engaged (or most bored) respondents complete them.

I worked with a client once who had been running a quarterly brand tracker for three years. It was a 40-question survey sent to their existing customer base. Every quarter it told them their customers were satisfied. Every quarter the results were presented to the board as evidence that brand health was strong. It took about ten minutes of questioning to establish that the survey was only reaching people who had already made a purchase and were engaged enough to open a marketing email. The people who had churned, or who had considered the brand and rejected it, were invisible to the research entirely. The data was accurate. The conclusions drawn from it were wrong.

Good quantitative research requires sample design that matches the question being asked. If you want to understand why people are not buying from you, you need to research non-customers and lapsed customers, not just your most loyal ones. If you want to understand brand perception in a market, your sample needs to reflect the market, not your email list.

The other discipline that gets skipped is statistical significance. Not every difference in a survey result is meaningful. A 3-point shift in a satisfaction score from one quarter to the next might be noise. Before you present research findings as evidence of anything, it is worth understanding the margin of error in your data.
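
A quick way to sanity-check a shift like that is to compute the margin of error on the underlying proportion. The sketch below is a minimal Python example; the 62% score and 400-respondent sample are hypothetical numbers chosen for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical tracker wave: 62% satisfied, 400 respondents.
p, n = 0.62, 400
moe = margin_of_error(p, n)
print(f"Margin of error: +/- {moe:.1%}")  # roughly +/- 4.8%

# A 3-point quarter-on-quarter shift sits comfortably inside that band,
# so on its own it is not evidence of a real change.
```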

Behavioural Data as a Research Strategy

One of the more significant shifts in market research over the past decade is the availability of behavioural data at scale. Web analytics, session recordings, heatmaps, search query data, social listening tools: all of these give you a window into what people actually do, as opposed to what they say they do when asked.

The gap between stated and actual behaviour is one of the most reliable findings in consumer psychology. People overestimate how rationally they make decisions. They underreport price sensitivity. They describe their media consumption habits inaccurately. Behavioural data cuts through this because it is observational rather than self-reported.

Early in my career, running paid search at scale, I noticed that the search queries coming through on a campaign often told a more honest story about customer intent than any survey could. The specific language people used, the combinations of terms, and the questions they were asking revealed anxieties and motivations that had never surfaced in focus groups. Search behaviour is candid and revealing in ways that other data sources simply are not.

The limitation of behavioural data is that it tells you what happened, not why. Someone abandoned your checkout. Someone rage-clicked a button that was not functioning as they expected. Rage clicks are a reliable signal of frustration, but they do not tell you whether the frustration was caused by a technical problem, a confusing design, or a pricing decision that made the customer reconsider. For the “why,” you still need to talk to people.

The most effective research programmes combine both. Use behavioural data to identify where the problems are. Use qualitative methods to understand why they exist. Use quantitative surveys to measure the scale of the problem and track whether your interventions are working.

Competitive Research as a Standing Discipline

Most organisations treat competitive research as something you do when a new competitor appears or when you are preparing a strategy presentation. This is the wrong cadence. By the time a competitor has moved significantly enough to trigger a formal research exercise, you are already behind.

Competitive research works best as a standing discipline with a regular rhythm. Not a quarterly deep-dive that produces a 60-slide deck, but a lightweight ongoing process that flags meaningful changes in competitor positioning, messaging, product development, pricing, and media investment.

The signals worth watching are not always obvious. A competitor changing their homepage headline is a data point. A shift in the keywords they are bidding on is a data point. A new hire in their leadership team, visible on LinkedIn, is a data point. Individually, none of these tells you much. Tracked over time and read together, they often reveal strategic intent well before any formal announcement.

When I was building out the intelligence function at an agency I ran, we created a simple internal briefing: a single page, updated fortnightly, covering the three most significant competitive moves we had observed and what they implied for our clients. It was not sophisticated. It did not require expensive tools. What it required was someone with the discipline to look, the judgement to filter signal from noise, and the communication skills to make the implications clear. That combination is rarer than it should be.

Customer Segmentation Research: Getting Past Demographics

Demographic segmentation (age, gender, income, geography) is the starting point most organisations never move beyond. It is easy to produce, easy to present, and almost entirely useless for informing creative or messaging decisions.

Knowing that your customer base skews 35 to 54, female, and ABC1 does not tell you anything about what motivates them to buy, what makes them loyal, what would make them switch, or how they talk about the problem your product solves. For that, you need attitudinal and behavioural segmentation.

Attitudinal segmentation groups customers by what they believe, value, and want, rather than who they are demographically. Behavioural segmentation groups them by what they actually do: purchase frequency, category involvement, channel preference, response to promotions. Both are more commercially useful than demographics, and both require primary research to build properly.

The most actionable segmentation frameworks I have worked with combine all three layers. They use demographics as a descriptor, not a definer. They use attitudes and behaviours to identify the segments that matter commercially, and then they use the demographic profile of those segments to plan media and targeting. In that order, not the reverse.
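
As a rough sketch of that order of operations in Python: cluster on behavioural features first, then profile the resulting segments demographically afterwards. The file name, the column names, and the choice of four clusters are all assumptions for illustration, not a recommended schema.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer table; behavioural columns drive the segmentation.
df = pd.read_csv("customers.csv")
behavioural = ["purchase_frequency", "avg_order_value",
               "promo_response_rate", "channel_preference_score"]

# Standardise so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(df[behavioural])
df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# Demographics come last, as a descriptor of each segment for media
# planning -- not as the basis of the segmentation itself.
print(df.groupby("segment")[["age", "household_income"]].median())
```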

How to Brief Research Properly

A good research brief is one of the most underrated documents in marketing. Most briefs are too long on background and too short on the specific decisions the research is meant to inform. A brief that runs to six pages of company history and market context but cannot clearly articulate what will change as a result of the research is not a brief. It is a document that protects the commissioner from accountability.

A well-structured research brief covers five things:

  • The decision context: what specific decision is being made, and when does it need to be made?
  • The research question: what do you need to know to make that decision with more confidence?
  • What you already know: what existing data or insight is available, so the research does not duplicate it?
  • The audience: who specifically are you researching, and why?
  • Constraints: budget, timeline, and any methodological requirements or restrictions.

That last point matters more than people think. If a research agency knows upfront that the findings need to be presented to a board in six weeks, they will design accordingly. If they find out on week four, you will get either a rushed project or a missed deadline. Research has lead times. Respecting them is not a bureaucratic nicety. It is how you get usable outputs.
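
If it helps to make the structure concrete, the five fields above fit in a template small enough to leave no room for excuses. A minimal sketch follows; every field name and value in it is invented purely for illustration, not a standard.

```python
# Illustrative research brief template -- all field names and values
# below are hypothetical.
research_brief = {
    "decision_context": "Reprice the entry tier before Q3 planning",
    "decision_deadline": "2025-06-30",
    "research_question": "How price-sensitive are lapsed customers?",
    "existing_insight": ["2024 brand tracker", "checkout analytics"],
    "audience": "Customers who churned within the last 12 months",
    "constraints": {"budget": "15k", "fieldwork_weeks": 4,
                    "board_presentation": "week 6"},
}
```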

There is a lot more on building a systematic approach to market intelligence, from tool selection to programme design, in the market research section of The Marketing Juice. If you are putting together a research function or reviewing an existing one, it is a useful reference point.

Turning Research Into Decisions

The most common failure mode in market research is not bad data. It is good data that never gets acted on. Research gets commissioned, conducted, presented, and then absorbed into the background noise of the organisation without changing anything. This is more common than anyone in the research industry likes to admit.

Part of the problem is structural. Research findings are often presented to the people who commissioned them, who then have to translate those findings into recommendations for people with the authority to act. Each translation is an opportunity for the insight to get diluted, reframed, or quietly buried because it conflicts with an existing plan or a senior leader’s prior view.

The organisations that get the most value from research are the ones that involve decision-makers earlier in the process. Not just at the debrief, but in the briefing stage, where they can articulate what they are trying to decide, and in the analysis stage, where they can interrogate the data before it has been packaged into a presentation. When the people with authority to act have been part of the research process, the findings are harder to dismiss.

It also helps to be explicit about the implications. A research debrief that ends with “here is what we found” is only half a job. The other half is “here is what this means for the decisions we said we were trying to make.” That requires the research team, whether internal or external, to be commercially engaged enough to make that connection. Not all of them are, and the ones who can make it are worth considerably more than the ones who cannot.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the most important step in any market research strategy?
Defining the decision the research is meant to inform before you start. Research that cannot be connected to a specific commercial decision tends to produce interesting data and no action. The question to ask before commissioning any research is: what will we do differently depending on what we find?
When should you use qualitative research instead of quantitative?
Qualitative research is the right tool when you need to understand why something is happening, not just what is happening or how often. It is particularly useful for exploring customer motivations, identifying friction in the purchase experience, and generating hypotheses to test at scale. It is not designed to produce statistically representative findings, and treating it as if it does is one of the most common misuses of the method.
How often should competitive research be conducted?
Competitive research is most valuable as a standing discipline with a regular cadence (fortnightly or monthly) rather than a periodic deep-dive triggered by a specific event. By the time a competitor has moved significantly enough to prompt a formal research exercise, you are often already behind. A lightweight ongoing monitoring process that tracks messaging, media investment, product changes, and hiring patterns gives you earlier warning of strategic shifts.
What is the difference between attitudinal and behavioural segmentation?
Attitudinal segmentation groups customers by what they believe, value, and want. Behavioural segmentation groups them by what they actually do, including purchase frequency, channel preference, and response to promotions. Both are more commercially useful than demographic segmentation alone. The most effective segmentation frameworks use all three layers, with demographics as a descriptor rather than the primary organising principle.
Why does market research often fail to influence decisions?
The most common reason is that decision-makers are only involved at the debrief stage, after the research has already been conducted and packaged. When findings conflict with existing plans or senior assumptions, they are easy to dismiss. Organisations that get more value from research tend to involve decision-makers earlier, in the briefing and analysis stages, and are explicit about the implications of findings for the specific decisions the research was commissioned to inform.
