Business and Market Research: What Most Companies Get Wrong

Business and market research is the process of gathering and analysing information about your market, customers, and competitors to make better commercial decisions. Done well, it reduces the cost of being wrong. Done poorly, or not done at all, it turns strategy into expensive guesswork.

Most organisations treat research as something you do before a campaign launches or when a new product needs validating. The companies that use it well treat it as a continuous input into how the business thinks, not a one-time project that gets filed and forgotten.

Key Takeaways

  • Market research only earns its cost when it changes a decision. If the findings confirm what everyone already believed, the process was probably designed to do exactly that.
  • Primary and secondary research serve different purposes. Confusing them leads to expensive primary work when desk research would have been sufficient, and vice versa.
  • The most common failure in business research is not poor methodology; it is asking the wrong question in the first place. Framing the brief correctly is worth more than any research tool.
  • Customer behaviour data and customer opinion data frequently contradict each other. When they do, behaviour is almost always the more reliable signal.
  • Research findings without a named decision-maker and a deadline rarely change anything. Insight without accountability is just reading material.

Why Most Business Research Fails Before It Starts

I have sat in enough research debrief meetings to know how this usually goes. A business commissions a study, a research agency delivers a deck of 60 slides, everyone nods along, and three months later nothing has changed. The research budget gets renewed the following year, and the cycle repeats.

The problem is almost never the research itself. It is the question that was asked at the start. Most research briefs are written backwards. They start with a methodology (“we need a survey”) rather than a decision (“we need to know whether to enter this segment or not”). When the brief is written around a tool rather than a decision, the output tends to be descriptive rather than directional. You learn things, but you do not learn what to do.

When I was running an agency and we were deciding whether to build out a dedicated performance practice, we did not commission a formal study. We spent three weeks doing structured interviews with eight of our largest clients, talking to three potential hires who had left competitor agencies, and mapping out where our billings were already drifting in that direction. None of that was expensive. All of it was directional. The decision became obvious before we had spent a significant budget on anything.

That is the standard business research should be held to. Not “did we gather data?” but “did it change what we were going to do?”

If you want a broader view of how research fits into competitive strategy, the Market Research and Competitive Intel hub covers the full range of tools and approaches, from primary research through to digital intelligence.

What Is the Difference Between Primary and Secondary Research?

Primary research is data you collect yourself, directly from the source. Surveys, interviews, focus groups, ethnographic observation, usability testing. Secondary research is data that already exists, collected by someone else for their own purposes, that you analyse for yours. Industry reports, government statistics, academic papers, published case studies, competitor filings.

The distinction matters because they answer different questions and carry different costs. Secondary research is almost always cheaper and faster. If you want to understand the size of a market, the demographic profile of a sector, or the regulatory environment in a new geography, there is usually published data that gets you most of the way there. Starting with primary research before exhausting secondary sources is a common and expensive mistake.

Primary research earns its cost when you need information that does not exist anywhere else. How do your specific customers make purchasing decisions? What do people in your target segment actually think of your product versus the alternatives? What language do they use when they describe their problem? These are questions that require you to go and ask someone directly.

The combination of the two is where most effective research programmes sit. Secondary research sets the context and narrows the hypothesis. Primary research tests or deepens that hypothesis with real people. Running them in the wrong order, or treating them as interchangeable, wastes time and money.

Qualitative vs. Quantitative: Which One Should You Trust?

This is one of the more persistent debates in market research, and it tends to produce more heat than light. The honest answer is that neither is inherently more trustworthy. They answer different questions, and using the wrong one for the question you are asking produces misleading results regardless of how well the methodology is executed.

Qualitative research (interviews, focus groups, open-ended surveys) gives you depth and language. It tells you how people think about a problem, what words they use, what emotional territory surrounds a decision. It is generative. It helps you form hypotheses. It is poor at telling you how many people feel a particular way, or whether a pattern you observed in twelve interviews holds across a market of 50,000 buyers.

Quantitative research gives you scale and statistical confidence. A well-constructed survey of 500 respondents can tell you with reasonable certainty what proportion of your target market holds a particular view. It cannot tell you why they hold it, or what it would take to change it.
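To make the "reasonable certainty" claim concrete, the standard margin-of-error formula for a sample proportion shows what 500 respondents actually buys you. This is an illustrative sketch; the function name and the 95% confidence level are my own choices, not from the article.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample proportion.

    Uses z = 1.96 for a 95% confidence level and the worst-case
    proportion of 0.5 by default, which maximises the margin.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A survey of 500 respondents, worst-case proportion:
moe = margin_of_error(500)
print(f"±{moe * 100:.1f} percentage points")  # roughly ±4.4 points
```

The same formula also shows the diminishing returns of larger samples: quadrupling the sample to 2,000 only halves the margin, which is why sample size alone is a poor proxy for research quality.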

The mistake I see most often is using quantitative research to validate something that has not been properly explored qualitatively first. You end up measuring the wrong things with great precision. I have seen clients spend significant budget on large-scale brand tracking studies that were measuring brand attributes nobody had actually verified were relevant to purchase decisions. The data was statistically strong. It was also largely useless.

Tools like Hotjar’s popup surveys sit in an interesting middle ground here. They are quantitative in scale but can be designed to capture qualitative language if the question design is thoughtful. They also capture responses in the moment, when someone is actually on your site making a decision, rather than retrospectively in a research setting. That context matters more than most people give it credit for.

What Does Good Market Research Actually Look Like in Practice?

Good market research starts with a decision that needs to be made, not a budget that needs to be spent. The brief should name the decision explicitly: “We are considering entering the SME accounting software market. We need to understand whether there is a viable segment underserved by the current options, and if so, what the acquisition economics might look like.” That is a brief you can build a research programme around. “We want to understand the market” is not.

From that decision, you work backwards to the information you need to make it. What do you already know? What can you find from secondary sources? What can only be answered by going directly to the market? That process usually produces a much smaller and more focused research programme than the default approach of commissioning a comprehensive study.

The other thing that separates good research from average research is how findings are translated into recommendations. Most research reports describe what was found. Good research translates findings into options and implications. “Forty per cent of respondents said price was their primary switching trigger” is a finding. “This suggests a price-led acquisition strategy would have a larger addressable pool than a features-led one, but the switching cost data suggests retention would need to be addressed separately” is an implication. The second version is what gets used.

Accountability is the final piece. Research findings need a named owner, a decision they are attached to, and a timeline. Without those three things, the most insightful research in the world tends to sit in a shared drive and age quietly.

How Do You Research Customer Behaviour Without Relying on What Customers Say?

One of the more uncomfortable truths in market research is that what customers say they do and what they actually do are frequently different. Not because they are being dishonest, but because human beings are genuinely poor at reconstructing their own decision-making processes. Ask someone why they bought a product and they will give you a plausible, logical account that may bear little resemblance to the actual sequence of events that led to the purchase.

This is why behavioural data, when you can get it, is almost always more reliable than attitudinal data. What pages did people visit before converting? Where did they drop off? What search terms brought them to the site? What did they click on and what did they ignore? These are records of actual behaviour, not reconstructions of it.

I spent a period working with a retail client who had commissioned a significant piece of customer satisfaction research showing very high satisfaction scores across their in-store experience. The behavioural data from their loyalty programme told a different story: repeat purchase rates were declining, basket sizes were shrinking, and the customers who said they were most satisfied were also the ones least likely to return within 90 days. The attitudinal research was not wrong exactly. It was just measuring something that turned out not to be very connected to commercial outcomes.

The most useful research programmes combine both. Behavioural data tells you what is happening. Qualitative research helps you understand why. Using one without the other leaves you with either patterns you cannot explain or explanations you cannot verify.

What Role Does Desk Research Play in a Modern Research Programme?

Desk research, the systematic gathering and analysis of existing published information, is consistently undervalued. It is less glamorous than primary research and does not generate the same kind of proprietary insight, but it is faster, cheaper, and often more comprehensive than people expect.

A well-executed desk research phase can tell you market size and growth trajectory, the competitive landscape and how it has shifted, regulatory changes that affect the category, consumer trend data from published surveys and panel studies, and how competitors are positioning themselves based on their own published communications and filings.

Analyst firms like Forrester publish regular category analyses that can give you a credible external view of a market without the cost of commissioning bespoke research. Industry associations, trade publications, and government statistical bodies are similarly underused. Most of the context you need to frame a strategic question is already out there.

The discipline in desk research is knowing when to stop. It is easy to spend weeks gathering secondary data and developing a sophisticated picture of a market that still does not answer the specific question you started with. Desk research should have a defined scope and a defined output, just like any other research activity.

How Should Small Businesses Approach Market Research Without Large Budgets?

The assumption that market research requires a significant budget is one of the more damaging myths in marketing. The most useful research I have seen small businesses do has cost almost nothing. What it required was discipline and honesty.

Talking directly to customers, not in a formal research setting but in genuine conversations about their experience, their alternatives, and what would make them more likely to recommend you, produces insight that no survey tool can match. Ten honest conversations with real customers will tell you more about your positioning than a 200-respondent survey designed to confirm what you already think.

Search data is free and behavioural. What people type into Google when they are looking for what you sell is one of the clearest windows into actual demand you can access without spending anything. The language people use in search queries, the questions they ask, the alternatives they consider alongside you, all of that is visible in keyword tools and search console data without any research budget at all.

Social listening, reading what people say in communities, forums, and comment sections about your category, is similarly free and often more candid than anything you would get in a formal research setting. People say things in public forums that they would moderate significantly if they knew they were being studied.

The constraint for small businesses is not budget. It is time and the discipline to act on what they find. A founder who spends two hours a week genuinely listening to their market will make better decisions than one who commissions an annual survey and ignores it.

When Is Market Research a Waste of Money?

This is the question that rarely gets asked but probably should be. Research is not always the right answer. There are situations where it adds genuine value, and situations where it is an expensive way of delaying a decision that could be made more cheaply.

Research is a waste of money when the decision has already been made and the research is being commissioned to provide cover for it. I have seen this more times than I can count. A senior leader wants to enter a new market or launch a new product. The research is commissioned not to test the idea but to validate it. The brief is written in a way that makes it difficult for the research to come back negative. The findings confirm the hypothesis. The project proceeds. Sometimes it works. Often it does not, and the research is quietly forgotten.

Research is also a waste of money when the cost of being wrong is lower than the cost of the research itself. If you are testing a small campaign or a new piece of content, running a formal research programme to validate the approach before you launch is almost certainly disproportionate. Launch it, measure what happens, and adjust. The market will tell you more quickly and cheaply than any pre-launch study.
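The cost-of-being-wrong comparison above can be reduced to a rough break-even calculation: expected loss avoided versus research cost. Everything below (the helper name, the probabilities, the pound figures) is hypothetical, purely to show the arithmetic.

```python
def research_is_worth_it(p_wrong: float, cost_of_wrong: float,
                         research_cost: float, risk_reduction: float) -> bool:
    """Rough break-even test: does the expected loss avoided exceed the research cost?

    p_wrong        -- estimated probability of making the wrong call unaided
    cost_of_wrong  -- commercial cost if the decision goes wrong
    risk_reduction -- fraction of that risk the research plausibly removes
    """
    expected_loss_avoided = p_wrong * cost_of_wrong * risk_reduction
    return expected_loss_avoided > research_cost

# Illustrative: a 30% chance of a £500k mistake, research halves the risk.
# Expected loss avoided is £75k, so a £40k study clears the bar.
print(research_is_worth_it(0.30, 500_000, 40_000, 0.5))  # True

# A low-stakes test campaign: 5% chance of a £20k miss does not justify £5k of research.
print(research_is_worth_it(0.05, 20_000, 5_000, 0.5))  # False
```

The inputs are guesses, of course, but forcing someone to write them down is itself a useful discipline: if nobody can name a plausible cost of being wrong, the research is probably being commissioned out of habit.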

And research is a waste of money when the organisation is not capable of acting on the findings. If the decision-making process does not have a mechanism for incorporating research outputs, if the findings will sit in a presentation that nobody revisits, then the budget would be better spent elsewhere. Good research requires an organisation that is genuinely prepared to be surprised by what it finds and to change course accordingly.

The broader question of how research connects to competitive positioning, and what a properly structured intelligence programme looks like, is covered in more depth across the Market Research and Competitive Intel section of The Marketing Juice. If you are building out a research capability rather than commissioning one-off studies, the articles there cover the practical architecture in detail.

How Do You Connect Research to Commercial Outcomes?

This is where most research programmes fall down, not in the quality of the research itself, but in the gap between insight and action. Research that does not change a decision or improve a commercial outcome has not earned its cost, regardless of how methodologically sound it was.

The connection between research and outcomes starts in the brief. If the brief names a decision and a decision-maker, the research has a natural endpoint. Someone is accountable for taking the findings and doing something with them. If the brief is vague, the research tends to produce general insight that nobody is specifically responsible for acting on.

Beyond the brief, the translation of findings into commercial language matters. Research that speaks in the language of marketing (awareness, perception, consideration) tends to stay in marketing. Research that speaks in the language of the business (revenue potential, customer acquisition cost, retention risk, churn drivers) gets into conversations that change how the business operates.

I have found that the most effective research outputs are not long reports. They are short, structured summaries of what was found, what it implies for the decision at hand, and what the options are given what was learned. Three pages that answer those questions clearly will get more traction than a 60-slide deck that documents everything the research team did.

The final test is retrospective. After the decision has been made and the programme has run, did the research improve the outcome? Did it help you avoid a mistake? Did it identify an opportunity you would otherwise have missed? If you cannot answer yes to at least one of those questions, the research programme needs redesigning, not just repeating.

What Are the Most Common Mistakes in Business Market Research?

Asking leading questions is the most common methodological failure. Survey questions that suggest a preferred answer, or that frame a choice in a way that makes one option obviously more socially acceptable than another, produce data that reflects the question design rather than the respondent’s actual views. This is a craft problem. Good questionnaire design is harder than it looks, and cutting corners on it corrupts the entire dataset.

Sampling errors are the second most common failure. Surveying your existing customers to understand the broader market is a persistent mistake. Your existing customers are not a representative sample of your potential market. They have already chosen you, which means they are systematically different from the people who chose someone else or nobody at all. If you want to understand why people are not buying from you, you need to talk to people who are not buying from you.

Over-reliance on self-reported data is the third. As discussed earlier, what people say they do and what they actually do diverge significantly, particularly for decisions that involve social desirability bias. People overreport how much they read, how healthy their habits are, how much they care about sustainability, and how rationally they make purchasing decisions. Designing research that relies entirely on self-report, without any behavioural validation, builds on shaky foundations.

The fourth mistake is treating research as a one-time event rather than a continuous input. Markets change. Customer needs evolve. A segmentation study from three years ago may be directionally useful but should not be treated as current truth. Organisations that build research into their operating rhythm, rather than commissioning it episodically when a decision forces the issue, consistently make better strategic choices.

Good content strategy faces a similar discipline problem. Copyblogger’s long-running work on audience understanding makes a related point about the gap between what creators assume their audience wants and what the audience actually values. The principle applies equally to market research: you have to be willing to find out you were wrong.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between market research and business research?
Market research focuses specifically on understanding customers, competitors, and market conditions. Business research is broader and covers any systematic information gathering that supports a business decision, including financial analysis, operational research, and supply chain intelligence. In practice, the terms are often used interchangeably, but market research is a subset of business research rather than a synonym for it.
How much should a business spend on market research?
There is no universal benchmark, and percentage-of-revenue rules are largely meaningless because the value of research depends entirely on the decision it is informing. A better framing is: what is the cost of making this decision without good information? If a wrong decision costs significantly more than the research, the research is worth commissioning. If the decision is low-stakes or easily reversible, cheaper and faster methods, including simply testing in market, are usually more appropriate.
What is the best way to conduct market research with a limited budget?
Start with secondary research before spending anything on primary. Published industry data, competitor communications, search query data, and social listening can answer a significant proportion of most strategic questions at little or no cost. When primary research is necessary, structured customer interviews are typically more valuable per pound spent than surveys, particularly in the early stages of a strategic question where you are still forming hypotheses rather than testing them.
How do you know if your market research is reliable?
Reliability in research comes from three things: a representative sample, unbiased question design, and consistent methodology. The most common reliability failures are sampling from an unrepresentative group (such as surveying only existing customers), asking leading questions that suggest a preferred answer, and drawing conclusions from sample sizes too small to support them. Cross-validating attitudinal data against behavioural data, where possible, is one of the most effective reliability checks available.
When should a business use a research agency versus doing research in-house?
Research agencies add most value when the methodology is complex, when an independent perspective is commercially important (such as when findings will be used externally or to support investment decisions), or when the in-house team lacks the specific expertise required. In-house research is often faster, cheaper, and produces more actionable output for operational decisions where the team already has deep context. The choice should be driven by the nature of the question, not by habit or the assumption that external always means more credible.
