Market Research: What It Actually Tells You and What It Doesn’t

Market research is the process of gathering and interpreting information about your customers, competitors, and market conditions to support better business decisions. Done well, it reduces the risk of acting on assumptions. Done poorly, it gives those assumptions a veneer of credibility and makes bad decisions harder to challenge.

Most companies do some version of it. Far fewer use it in a way that changes what they actually do.

Key Takeaways

  • Market research reduces decision risk, but only when the questions being asked are the right ones. Garbage in, garbage out applies here more than almost anywhere else in marketing.
  • Primary and secondary research serve different purposes. Treating them as interchangeable is one of the most common and costly mistakes in the planning process.
  • Customers rarely tell you what they want. They tell you what they think you want to hear, or what they can articulate. Behavioural data is often more honest than stated preferences.
  • Research findings should challenge your strategy, not validate it. If every piece of research confirms what you already believed, you are probably asking the wrong questions.
  • The gap between insight and action is where most research investment is wasted. A finding no one acts on is just an expensive document.

Why Most Market Research Fails Before It Starts

I have sat in more research debriefs than I can count. Some of them were genuinely useful. Many of them confirmed what the client already believed, wrapped in a slide deck with confidence intervals and a hefty invoice attached. The problem was almost never the methodology. It was the brief.

When a business commissions research to validate a decision that has already been made, the research is not really research. It is theatre. The questions are shaped to produce the answers leadership wants to hear. The sample is chosen to reflect the most favourable audience. The findings are presented in a way that supports the narrative rather than interrogating it.

I have seen this play out in agency pitches, in brand strategy projects, and in product launches. A company spends six figures on research that tells them their new product concept is well-received by consumers. They launch. It fails. Then someone quietly goes back to the research and finds that the question was phrased in a way that made it almost impossible to say no, the sample skewed young and optimistic, and the competitive context was never tested at all.

Good market research starts with intellectual honesty. That means being willing to hear that your instinct was wrong, that your target audience is not who you think it is, or that the market you want to enter is not as attractive as you assumed. If you are not prepared for that conversation, the research will not help you.

Understanding this sits at the heart of sound marketing fundamentals. Research is not a standalone activity. It is part of a broader discipline of making decisions with better information than your competitors have.

Primary vs Secondary Research: The Distinction That Actually Matters

There are two types of market research, and they are not interchangeable. Primary research is data you collect yourself, directly from the source. Secondary research is data that already exists, collected by someone else for a different purpose.

Both are valuable. Both have significant limitations. The mistake is treating one as a substitute for the other.

Primary research gives you specificity. You can ask exactly the questions your business needs answered, from exactly the audience you care about. A customer survey, a series of depth interviews, an ethnographic study of how people use your product at home. These are things no secondary source can give you, because no one else had your specific question.

The limitation of primary research is cost and time. It is slow to set up, expensive to do properly, and easy to do badly. A poorly designed survey is worse than no survey at all, because it produces misleading data with the appearance of rigour.

Secondary research is faster and cheaper. Industry reports, government data, competitor analysis, academic literature. It gives you the lay of the land. It tells you how big a market is, how it has grown, what the major forces are shaping it. What it cannot tell you is how your specific customers think, what they want from you specifically, or why they chose a competitor last month.

The most effective research programmes combine both. Secondary research to understand the market context. Primary research to understand your customer within that context. Most businesses, in my experience, do one or the other and call it done.

If you want a deeper look at the specific methods used in each type, the breakdown of market survey techniques is worth reading alongside this. The mechanics matter, but they only matter once the strategic purpose is clear.

What Customers Tell You and What They Actually Mean

There is a well-worn problem in qualitative research: people are not reliable narrators of their own behaviour. They tell you what they think is true. They tell you what sounds reasonable. They tell you what they imagine a sensible, rational person would do. And then they go and do something entirely different.

I remember a project early in my career where a client had done extensive focus group research before launching a new service. Participants said they would pay a premium for it. They said it solved a real problem. They said they would recommend it to colleagues. The launch was underwhelming. When we dug into the post-launch data, the issue was not the product. It was that the problem it solved was not as painful as people had claimed in the groups. It was a mild inconvenience, not a genuine friction point. People had been polite. They had been helpful. They had not been honest, not because they were lying, but because they did not know.

This is why behavioural data is often more valuable than stated preference data. What people actually do, the pages they visit, the products they buy, the searches they run, the emails they open, tells you more than what they say they would do in a hypothetical scenario.

Tools like Hotjar give you a window into real user behaviour on your site: where people click, where they stop reading, where they abandon a form. That kind of observational data does not lie. It does not try to be helpful. It just shows you what happened.

The implication for research design is significant. If you are relying purely on what customers tell you they want, you are working with one hand tied behind your back. The best research programmes triangulate. They combine what people say with what people do, and they look for the gaps between the two. Those gaps are often where the most useful insights live.
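As a purely illustrative sketch (every feature name and number below is invented), the say/do gap can be surfaced with a few lines of analysis: rank features by the difference between stated importance and observed usage, and investigate the largest gaps first.

```python
# Hypothetical data: stated importance (from a survey, scaled 0-1) versus
# observed usage rate (from behavioural analytics, scaled 0-1) per feature.
stated = {"integrations": 0.82, "reporting": 0.74, "mobile_app": 0.65, "api": 0.41}
observed = {"integrations": 0.28, "reporting": 0.71, "mobile_app": 0.22, "api": 0.55}

# Positive gap: people claim to value it but rarely use it.
# Negative gap: people use it more than they say they value it.
gaps = {feature: stated[feature] - observed[feature] for feature in stated}

for feature, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:12s} stated={stated[feature]:.2f} "
          f"observed={observed[feature]:.2f} gap={gap:+.2f}")
```

The ranking itself is trivial; the research value is in the qualitative follow-up it directs, asking why people claim to want something they do not use, or quietly rely on something they never mention.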

Defining Your Target Audience Before You Start Researching

One of the most common errors I see is businesses conducting market research without a clear enough definition of who they are researching. The sample ends up too broad, the findings are averaged across people with very different needs and contexts, and the output is a set of insights that apply to everyone and therefore help no one.

Market research is only as useful as the clarity of the question it is answering. And the most important question is usually: who exactly are we trying to understand?

This is not just a research problem. It is a strategy problem. If you have not done the work to define your target audience with real specificity, your research will reflect that vagueness back at you. You will end up with findings that describe the average of a group that does not really exist, rather than the specific people whose behaviour you need to understand and influence.

When I was running agencies and we took on a new client, one of the first things we would do was interrogate their existing customer data before we designed any research. Who is actually buying from you right now? Not who you think is buying, not who you want to be buying, but who is actually converting, spending money, and coming back. Sometimes that analysis alone produced more useful insight than a formal research programme. It told you where the business was actually winning, which is often different from where leadership thought it was winning.

The audience definition work also determines which research methods are appropriate. If your target audience is senior procurement managers at enterprise companies, you are not going to reach them through an online survey panel. If your audience is 18-to-24-year-olds who discovered you through social media, depth interviews over the phone are probably not the right format. Method and audience have to match.

Competitive Intelligence: The Research Most Businesses Underinvest In

Most market research programmes focus inward. They ask customers about their own experience with your brand, your products, your service. That is useful. But it is only half the picture.

Understanding your competitors, what they are doing, how they are positioning, what customers think of them, where they are winning and losing, is equally important. And in my experience, it is the research that gets cut first when budgets are tight.

Competitive intelligence does not have to be expensive. A significant amount of it is available through secondary sources. Competitor websites, product reviews, job postings (which tell you what capabilities they are building), pricing pages, case studies, and press releases all contain useful signal. Market penetration analysis can tell you how saturated a market already is and where the whitespace might be. Pricing strategy research can reveal where competitors are competing on cost versus value.

The more sophisticated version involves primary research. Win/loss interviews, where you speak to prospects who chose a competitor instead of you, are among the most valuable research you can do. They are also among the most uncomfortable, which is probably why so few companies do them systematically. Nobody enjoys being told why they lost. But that discomfort is precisely what makes the information valuable.

I judged the Effie Awards for a period, and what struck me about the work that performed well over time was how clearly the winners understood their competitive context. They were not just good at understanding their own customers. They understood the full landscape those customers were navigating, including the alternatives they were considering. That understanding shaped everything from positioning to channel selection to messaging.

A SWOT analysis is often the practical output of this kind of competitive and market research. It is not a sophisticated tool, but it forces you to be honest about where you are strong, where you are exposed, and what the external environment is doing. The problem is that most SWOT analyses are done in a workshop with no research to underpin them. They end up reflecting the opinions of whoever is loudest in the room, rather than actual market evidence.

The Role of Market Research in Go-To-Market Planning

A go-to-market strategy without research behind it is a set of assumptions dressed up as a plan. That might sound harsh, but I have seen enough go-to-market launches fail on assumptions that could have been tested cheaply and quickly to believe it.

Research should inform every major element of a go-to-market plan. Who you are targeting and why. How you are positioning relative to alternatives. Which channels your audience actually uses. What messaging will resonate and what will fall flat. What price point the market will bear. These are not questions you can answer from the boardroom.

The BCG work on understanding the financial needs of an evolving population is a useful illustration of this. Even in a sector as data-rich as financial services, the firms that outperformed were the ones that invested in understanding how customer needs were shifting, rather than assuming yesterday’s customer was tomorrow’s customer. The market changes. Research tells you how.

Earlier in my career, I placed too much weight on lower-funnel performance signals when planning go-to-market activity. Click-through rates, conversion rates, cost per acquisition. These numbers are real, but they only tell you about the people who were already in market. They tell you nothing about the much larger group who were not yet considering you, the people you need to reach if you want to grow rather than just harvest existing demand.

Market research, particularly the kind that reaches beyond your existing customer base, is what gives you a view of that larger opportunity. It tells you who is out there, what they care about, and what it would take to bring them into your orbit. That is the growth question. And it is the question that performance data, on its own, cannot answer.

The broader thinking on this sits across the Go-To-Market and Growth Strategy Hub, which covers the strategic context that market research needs to serve. Research without a strategic purpose is just data collection. It needs to be connected to a question that actually matters to the business.

Quantitative vs Qualitative: Knowing Which One to Trust

Quantitative research gives you scale and statistical confidence. Qualitative research gives you depth and nuance. Neither is inherently superior. The question is which one answers the question you are actually asking.

If you want to know what percentage of your customers are satisfied with your service, you need quantitative research. A survey of a properly representative sample will give you a number you can track over time and compare against benchmarks. That is the right tool for that question.
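For the quantitative route, the required sample size follows directly from the margin of error you can tolerate. A minimal sketch using the standard formula for estimating a proportion (assuming a 95% confidence level; the specific margins below are just illustrative):

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size to estimate a population proportion within the
    given margin of error. z = 1.96 corresponds to 95% confidence;
    p = 0.5 is the most conservative (largest-sample) assumption."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size(0.05))  # within +/- 5 points: 385 respondents
print(sample_size(0.03))  # within +/- 3 points: 1068 respondents
```

Note that this says nothing about representativeness. A large but skewed sample still misleads, which is why the sampling frame matters more than the raw respondent count.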

If you want to understand why customers who cancel your service do so, you need qualitative research. A survey will give you a list of reasons in order of frequency. A series of depth interviews will give you the texture behind those reasons: the moment the decision crystallised, the alternative they moved to, the thing you could have done differently. That texture is what makes the insight actionable.

The mistake I see most often is using quantitative research to answer qualitative questions. You run a survey and ask people to rate their satisfaction on a scale of one to ten. You get a number. But you have no idea what drove that number, what it means in terms of future behaviour, or what you would need to change to move it. The number is not the insight. The number is just the starting point for the real investigation.
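A small illustration of why the number is not the insight: two groups with identical mean satisfaction scores can describe completely different situations. The ratings below are invented for the purpose of the example.

```python
import statistics

# Two hypothetical samples of 1-10 satisfaction ratings.
uniform_group = [7, 7, 7, 7, 7, 7, 7, 7, 7, 7]        # everyone mildly satisfied
polarised_group = [10, 10, 10, 10, 10, 4, 4, 4, 4, 4]  # delighted vs at-risk

print(statistics.mean(uniform_group), statistics.mean(polarised_group))      # both 7.0
print(statistics.pstdev(uniform_group), statistics.pstdev(polarised_group))  # 0.0 vs 3.0
```

Both groups average 7.0, but the second one contains a churn risk the mean completely hides. The qualitative work is what tells you who the fours are and why.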

Good research programmes use quantitative data to identify where to focus and qualitative research to understand what is happening there. They are complementary, not competing.

When Research Confirms What You Already Know

There is a version of market research that is essentially expensive confirmation. The business already knows what it wants to do. The research is commissioned to provide cover for that decision. The findings, predictably, support the predetermined conclusion. Everyone nods. The project proceeds.

I have seen this happen at every level, from small businesses testing a new product concept to large organisations evaluating a market entry. The research is not really being used to make a decision. It is being used to defend a decision that has already been made.

This is not always cynical. Sometimes it is just human nature. We form views quickly. We find evidence that supports those views. We discount evidence that challenges them. Research is not immune to this. The people commissioning it, designing it, and interpreting it all bring their own biases to the process.

The antidote is to build challenge into the process explicitly. Before you commission research, write down what findings would cause you to change course. What would the data have to show for you to decide not to proceed? If you cannot answer that question, the research is probably not going to change anything, and you should ask whether it is worth doing at all.

I have found that the most valuable research findings are the uncomfortable ones. The insight that your brand is perceived very differently by your target audience than you believed. The discovery that the feature you thought was your strongest differentiator is not what customers actually care about. The finding that a competitor you dismissed is being considered seriously by prospects you thought were yours. These are the findings that change strategy. And they are the ones that tend to get buried in the appendix.

Research and the Fundamental Marketing Question

There is a version of marketing that treats itself as the solution to every business problem. Sales are down? Run more ads. Customer retention is falling? Increase email frequency. Brand awareness is low? Buy more media.

Market research, when it is done honestly, often reveals that the problem is not a marketing problem at all. The product is not good enough. The service experience is letting people down. The pricing is misaligned with perceived value. The distribution model is wrong for the audience.

I have spent time in turnaround situations where the instinct was to throw marketing budget at a declining business. And sometimes that is the right call. But more often, the research tells a different story. Customers are leaving because of something the business is doing, or failing to do, that marketing cannot fix. If a company genuinely delighted its customers at every interaction, at every touchpoint, in every transaction, the growth problem would largely take care of itself. Marketing is often a blunt instrument being used to compensate for a more fundamental gap.

Forrester’s work on intelligent growth models makes this point in a different way: sustainable growth comes from understanding where value is genuinely being created, not from amplifying activity across the board. Research is what tells you where the value is and where it is not.

This is one of the reasons I am sceptical of marketing plans that are built before any research has been done. The plan might be beautifully structured. The channels might be well-chosen. The creative brief might be sharp. But if the underlying assumptions about the customer, the competitive context, and the market opportunity have not been tested, the plan is built on sand.

How to Turn Research Findings into Decisions

The gap between research findings and business decisions is where most research investment is wasted. A debrief happens. Insights are shared. People say the right things about how interesting it is. And then the strategy proceeds largely as planned, because no one has been given clear responsibility for translating the findings into specific changes.

The problem is structural as much as it is cultural. Research is often commissioned by one team and the decisions are made by another. The people who designed the research questions are not always the people who need to act on the answers. The findings get lost in translation, or diluted as they pass through layers of interpretation.

The most effective research processes I have been part of are the ones where the decision-makers are involved from the start, not just at the debrief. They help shape the questions. They sit in on some of the interviews or focus groups. They see the raw data, not just the polished summary. When you have heard a customer describe a problem in their own words, it is much harder to dismiss it than when you read a bullet point in a slide deck that says “customers report friction in the onboarding process.”

After the debrief, the output should not be a report. It should be a set of specific decisions or questions that need to be resolved. Which assumptions has the research confirmed? Which has it challenged? What do we now need to do differently? Who is responsible for each change? When will we know if it has worked?

Research without accountability is just documentation. The value is in what changes as a result.

Market Research and Digital Strategy

Digital channels have made certain kinds of market research dramatically easier. You can run a survey to a targeted sample within hours. You can A/B test a message in market and let real behaviour tell you which version works better. You can analyse search data to understand what questions your audience is asking and what language they use to describe their problems.
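The A/B comparison above reduces to a standard two-proportion z-test. A minimal sketch with invented conversion counts (for production work, a tested statistics library is the safer choice):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test: message A converts 120/2400 (5.0%), message B 90/2400 (3.75%).
z = two_proportion_z(120, 2400, 90, 2400)
print(round(z, 2), "significant at 95%" if abs(z) > 1.96 else "not significant")
```

Even here, the caveat from the surrounding discussion applies: a statistically significant winner tells you which message performed better with people already in market, not why, and not what it did to anyone outside that funnel.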

But digital has also created a false sense of confidence. Because the data is abundant and immediate, it can feel like you know more than you do. Click-through rates and engagement metrics are real, but they are proxies for understanding, not understanding itself. They tell you what happened, not why. They tell you which ad performed better, not what the audience actually thinks about your brand.

The pipeline and revenue research from Vidyard highlights how much potential revenue goes unrealised because go-to-market teams are not reaching the right people with the right information at the right time. That is a research problem as much as it is an execution problem. You cannot reach the right people if you do not know who they are and what they need.

When building a digital strategy, research should inform channel selection, not follow it. The question is not “should we be on Instagram?” The question is “where does our target audience spend their time, what are they looking for when they get there, and can we reach them in a way that is genuinely relevant?” Those are research questions. And if you cannot answer them with evidence, you are making channel decisions based on trend rather than insight.

This connects directly to the work of building a digital marketing strategy from scratch. Research is not the first step you do before the strategy. It is woven through every stage of the strategy’s development, from audience definition to channel selection to message testing to performance evaluation.

The Limits of Research: What It Cannot Tell You

Market research is a tool, not an oracle. It has real limits, and understanding those limits is as important as knowing how to use it well.

Research can tell you about the present and the recent past. It cannot tell you about the future. Consumer preferences shift. Technology changes what is possible. Competitors do unexpected things. A research finding that was accurate twelve months ago may be meaningless today. This is particularly true in fast-moving categories, where the landscape can change significantly between when the research is conducted and when the strategy it informed is actually in market.

Research also cannot tell you what to do. It can tell you what is true. It can tell you what customers think, what the competitive landscape looks like, and what the market opportunity might be. But the decision about what to do with that information is a human judgement, shaped by the organisation’s capabilities, risk appetite, and strategic priorities. Research informs that judgement. It does not replace it.

There is also the question of what research cannot reach. Truly novel innovations, things that do not yet exist in a form that customers have experienced, are notoriously difficult to research. Customers cannot tell you they want something they have never imagined. The classic failure mode here is using research to evaluate a genuinely new idea against existing expectations, and concluding the idea is bad because it does not match what people are used to. That is a misuse of research, not a research failure.

Understanding these limits does not mean doing less research. It means being more precise about what you are asking research to do, and honest about where other forms of judgement need to take over.

Growth Strategies That Research Actually Supports

Market research is not just a planning tool. It is a growth tool. The businesses that use research most effectively tend to grow more consistently, because they are making decisions based on what is actually true about their market rather than what they hope is true.

Research supports growth in several specific ways. It identifies underserved segments, groups of customers whose needs are not being well met by existing solutions, which is often where the most attractive growth opportunities sit. It reveals the messaging that actually moves people, as opposed to the messaging that sounds good in an internal presentation. It exposes the barriers to purchase that are preventing conversion, which is often more valuable than finding new ways to drive traffic.

The BCG work on long-tail pricing in B2B markets is an example of how research-driven insight can reshape a go-to-market approach entirely. Pricing is one of the most powerful levers in marketing, and it is also one of the least researched. Most businesses set prices based on cost-plus logic or competitive benchmarking, not on a genuine understanding of what their customers value and what they would pay for it.

Growth hacking, for all the noise around it, is essentially about rapid experimentation informed by customer insight. The tools support the process, but the process starts with a question about what customers want and what barriers exist between them and the product. Growth hacking tools can accelerate the testing cycle, but they cannot replace the foundational research that tells you what to test.

When I grew an agency from around 20 people to over 100, the most important research we did was not about the market in the abstract. It was about our clients: what they valued, what they were frustrated by, where they felt underserved. That research shaped how we built the business, which services we prioritised, which capabilities we hired for, and how we positioned against competitors. It was not sophisticated. But it was honest. And it drove decisions that actually changed the trajectory of the business.

Some of the most effective growth strategies also involve reaching audiences who have not yet heard of you, rather than just optimising for the people who already have. This is where research into broader market segments, including people who are not yet customers, becomes critical. The Forrester research on go-to-market struggles in complex sectors illustrates how even well-resourced organisations fail when they do not understand the full picture of who they need to reach and what those people need to hear.

Research also plays a role in understanding which growth strategies are realistic for your specific situation. Not every business can execute a viral campaign. Not every product lends itself to referral-driven growth. Understanding the mechanics of viral marketing strategies is only useful if the research tells you your audience and product are actually suited to that kind of growth mechanism. Copying a strategy that worked for someone else, without understanding whether the underlying conditions apply to you, is one of the most common and expensive mistakes in marketing.

The full strategic context for how research connects to growth planning sits across the Go-To-Market and Growth Strategy Hub. If you are using research to inform a specific growth initiative, the frameworks there give you the structure to connect the insight to the execution.

Building a Research Habit, Not Just a Research Project

The businesses that use market research most effectively treat it as an ongoing discipline rather than a one-off project. They have mechanisms for continuously gathering customer feedback. They review competitive intelligence regularly. They track how market conditions are shifting and update their assumptions accordingly.

This does not require a large budget. It requires a commitment to staying curious about what is actually happening in your market, as opposed to what you assumed was happening when you last checked.

In practice, this means building research into the rhythm of the business. Quarterly customer interviews. Regular analysis of behavioural data. Annual competitive reviews. Systematic collection of win/loss data from the sales team. None of these things are expensive. All of them are valuable. And together, they create an organisation that is genuinely connected to its market rather than operating on outdated assumptions.

The organisations I have seen struggle most with market research are the ones that treat it as something you do before a big launch and then put away. Markets do not stand still. Customers change. Competitors move. The insight that was accurate eighteen months ago may be actively misleading you today. Research is not a snapshot you take once and frame on the wall. It is a practice.

Building that practice requires leadership that is genuinely curious, willing to be challenged, and interested in what is true rather than what is comfortable. That is a cultural question as much as a methodological one. And it is, in my experience, the most important factor in whether market research actually changes anything.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between primary and secondary market research?
Primary research is data you collect yourself, directly from customers, prospects, or the market, through surveys, interviews, focus groups, or observational methods. Secondary research is data that already exists, collected by others for a different purpose, such as industry reports, government statistics, or published academic work. Primary research gives you specificity and relevance to your exact question. Secondary research gives you context and scale. Effective research programmes use both.
How much should a business spend on market research?
There is no universal answer, but the more useful question is: what is the cost of making a major decision without adequate information? For most businesses, the cost of a failed product launch, a misaligned go-to-market strategy, or a poorly timed market entry far exceeds the cost of the research that could have prevented it. Research budgets should be proportional to the size of the decision being made, not set as a fixed percentage of marketing spend.
Can small businesses do market research without a big budget?
Yes. Many of the most valuable research activities cost very little. Customer interviews can be conducted over the phone or video call. Online survey tools are inexpensive. Competitor analysis can be done through publicly available information. Behavioural data from your own website and sales records is free and often underused. The constraint for most small businesses is not budget but discipline: the commitment to ask the questions and act on the answers.
How do you know if your market research is reliable?
Reliability depends on several factors: whether the sample is representative of the audience you care about, whether the questions are neutral and not leading, whether the method matches the question being asked, and whether the findings have been triangulated against other sources of evidence. Single-source research is always more vulnerable to error than research that combines multiple methods and data types. If your findings confirm everything you already believed, that is a signal to probe further, not to celebrate.
What is the most common mistake businesses make with market research?
Commissioning research to validate a decision that has already been made, rather than to genuinely inform it. When the questions are designed to produce a particular answer, the research is not really research. It is confirmation bias with a methodology attached. The second most common mistake is gathering findings and then not acting on them. Research that sits in a presentation and changes nothing has produced no return on the investment made in it.
