Product Research That Shapes What You Build

Conducting research to identify products that would satisfy customers means gathering structured insight about unmet needs, purchase drivers, and usage behaviour before committing resources to development or launch. Done properly, it closes the gap between what a business assumes customers want and what customers will actually pay for.

Most companies skip this step, or do a version of it that confirms what they already believe. The result is a product that makes sense internally but lands flat externally, and a marketing team left trying to generate demand for something the market never asked for.

Key Takeaways

  • Product research should surface unmet needs, not validate existing assumptions. If your research only confirms what you already believe, it is not research, it is theatre.
  • Qualitative and quantitative methods answer different questions. You need both to understand what customers want and whether enough of them want it to matter commercially.
  • The most useful insight often comes from studying what customers do, not what they say. Behaviour is more reliable than stated preference.
  • Research that never reaches a decision-maker is wasted. The output has to connect directly to a product, pricing, or positioning decision.
  • Satisfaction is not a fixed target. What customers find acceptable today will be the floor tomorrow. Research needs to be ongoing, not a one-time exercise.

Why Most Product Research Fails Before It Starts

I have sat in enough briefing rooms to recognise the pattern. A business wants to launch a new product or extend a line. Someone commissions research. A few focus groups run, a survey goes out, the results come back broadly positive, and the product launches. Six months later, sales are below forecast and the post-mortem focuses on the media plan rather than the product itself.

The research failed not because the methodology was wrong but because the brief was wrong. The question asked was “will people like this?” rather than “what problem does this solve and is that problem worth solving?” Those are very different questions, and they produce very different outputs.

When I was running an agency and we worked with clients on product launches, the ones that performed best were almost always the ones where the client had done genuine need-state mapping before we got anywhere near a campaign. The ones that underperformed were almost always cases where marketing was being asked to compensate for a product that had never been properly stress-tested against real customer behaviour.

If you want to understand the broader discipline that product research sits within, the market research hub at The Marketing Juice covers the full range of methods and frameworks that connect customer insight to commercial decisions.

What Are You Actually Trying to Find Out?

Before choosing a research method, you need to be precise about the question. “What products would satisfy our customers?” is not a research question. It is a starting point for writing one.

Useful product research questions tend to fall into four categories. First, need identification: what problems do customers currently experience that no product adequately solves? Second, solution validation: does this proposed product address those problems in a way that is meaningfully better than existing alternatives? Third, preference mapping: among a set of viable solutions, which attributes matter most to which customer segments? Fourth, adoption barriers: what would stop someone from switching to this product even if they found it appealing?

Each of these questions requires a different method. Running a single survey and expecting it to answer all four is how you end up with data that looks comprehensive but tells you almost nothing actionable.

Qualitative Methods: Where You Find the Real Signal

Qualitative research is where you discover things you did not know to ask about. It is exploratory by nature, and its value is in surfacing the language, frustrations, and mental models customers actually use, rather than the ones you project onto them.

In-depth interviews remain the most reliable qualitative tool for product research. A well-run interview with ten to fifteen customers who represent your target segment will give you more usable insight than a survey of five hundred people who are only half-engaged. What matters is asking about behaviour and experience rather than opinion. “Walk me through the last time you tried to solve this problem” will give you more than “how important is this feature to you?”

Ethnographic observation, where you watch customers in their actual environment rather than a research setting, adds another layer. People behave differently when observed in context versus when asked hypothetically. A customer might tell you in a focus group that they would happily pay a premium for a more sustainable option. Watch them at the shelf and you will often see a different story. Behaviour is the more honest data point.

Focus groups have their place but they are widely misused. They are good for exploring reactions to concepts and generating hypotheses. They are poor for measuring anything, because group dynamics distort individual responses. If your research plan relies heavily on focus groups, treat the output as directional rather than conclusive.

Quantitative Methods: Where You Test Whether the Signal Is Real

Once qualitative research has given you a hypothesis, quantitative methods let you test whether it holds at scale. The two most useful tools for product research are surveys with conjoint analysis and concept testing.

Conjoint analysis is worth understanding properly if you have not used it. Rather than asking customers to rate features in isolation, conjoint presents them with trade-off scenarios. Which would you prefer: a product at this price with these features, or a product at that price with those features? Repeated across enough combinations, this reveals the relative weight customers place on each attribute without them having to articulate it directly. It is far more predictive of actual purchase behaviour than asking people to rank features on a scale of one to ten.
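To make the mechanics concrete, here is a minimal sketch of rating-based conjoint in Python. The attribute names, profiles, and ratings are entirely illustrative, not drawn from any real study: customers rate a set of product profiles, and least squares recovers the part-worth utility of each attribute, i.e. how much weight each one carries in the overall rating.

```python
# Rating-based conjoint sketch: recover attribute part-worths from
# profile ratings via least squares. All data is illustrative.
import numpy as np

# Each row is a product profile: [premium_material, fast_delivery, low_price]
# (1 = attribute present, 0 = absent). Full 2^3 factorial design.
profiles = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])
# Average customer rating (1-10) for each profile -- hypothetical numbers.
ratings = np.array([9.1, 7.0, 8.4, 8.0, 5.9, 5.8, 7.2, 4.1])

# Regress ratings on the attributes (plus an intercept for the baseline).
X = np.column_stack([np.ones(len(profiles)), profiles])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
baseline, partworths = coefs[0], coefs[1:]

for name, w in zip(["premium_material", "fast_delivery", "low_price"],
                   partworths):
    print(f"{name}: {w:+.2f}")
```

In this toy data, low price carries roughly twice the weight of either feature, which is exactly the kind of trade-off ranking that asking customers to rate features in isolation tends to miss.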

Concept testing works best when you are comparing two or three distinct product directions rather than asking for a verdict on a single concept. A single concept test almost always produces positive results because people are reluctant to say they dislike something presented to them. Comparative testing forces genuine discrimination.

One thing I have learned from managing research across multiple industries: sample design matters more than sample size. A survey of two hundred people who genuinely represent your target customer is worth more than a survey of two thousand people who are broadly the right demographic but not actually in the market for what you are building.

How to Use Existing Data Before Commissioning New Research

One of the more consistent mistakes I see is commissioning primary research before exhausting what already exists. Businesses sit on more useful data than they realise, and the cost of analysing it properly is a fraction of a new research programme.

Customer service records are underused as a product insight tool. The complaints, queries, and returns data your support team handles every day is a direct signal of where existing products fall short. If a significant proportion of your support volume relates to a specific use case, that is a product gap worth investigating. I worked with a client in the consumer goods space who discovered, by properly categorising six months of support tickets, that a product variant they had deprioritised was generating disproportionate demand signals through the back door of customer complaints.
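Categorising support tickets does not require specialist tooling to start. A rough first pass can be as simple as keyword matching and counting, as in this sketch; the categories, keywords, and tickets below are invented for illustration.

```python
# Minimal sketch: bucket raw support tickets by keyword to see which
# product issues dominate. Categories and tickets are illustrative.
from collections import Counter

CATEGORIES = {
    "sizing": ["too small", "too large", "wrong size"],
    "durability": ["broke", "cracked", "stopped working"],
    "missing_variant": ["unscented", "travel size", "refill"],
}

def categorise(ticket: str) -> str:
    text = ticket.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

tickets = [
    "The lid cracked after a week",
    "Do you sell a travel size?",
    "Arrived too small, need an exchange",
    "Any plans for an unscented version?",
    "Where is my order?",
]

counts = Counter(categorise(t) for t in tickets)
print(counts.most_common())
```

Even a crude pass like this makes the demand signal visible: in the toy data, requests for a missing variant outnumber every other category, which is the pattern the client example above surfaced from six months of real tickets.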

Search data is another underused source. What people search for, and how they phrase it, reveals unmet need in real time. The gap between what customers search for and what your product range currently addresses is a product opportunity map. Forrester has written about the gap between data availability and data use in ways that are still relevant, and the pattern holds in product research as clearly as anywhere else.
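The search-versus-range gap can be made mechanical. This sketch, with invented queries, volumes, and catalogue tags, flags searched-for needs that no current product tag covers:

```python
# Sketch: map search demand against current product coverage to flag
# gaps. Queries, volumes, and catalogue tags are hypothetical.
searches = {
    "vegan protein bar": 1200,
    "gluten free protein bar": 900,
    "protein bar low sugar": 450,
}
catalogue_tags = {"gluten free", "low sugar"}  # needs the range addresses

gaps = {query: volume for query, volume in searches.items()
        if not any(tag in query for tag in catalogue_tags)}
print(gaps)  # demand the current range does not address
```

Each surviving entry is a candidate row on the product opportunity map, already sized by search volume.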

Review mining, systematically reading and categorising customer reviews of your products and competitors, surfaces the language and concerns that matter most to buyers. The specific words customers use to describe a problem are often more useful than any survey response, because they are unprompted and unfiltered.
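A simple way to surface that unprompted language at scale is to count recurring phrases across reviews. This sketch uses hypothetical reviews and a deliberately tiny stop-word list; in practice you would run it over thousands of reviews of your own and competitors' products.

```python
# Sketch: surface the most common two-word phrases across reviews,
# a crude proxy for the language customers use unprompted.
import re
from collections import Counter

reviews = [
    "Great product but the battery life is poor",
    "Battery life barely lasts a day",
    "Love the design, hate the battery life",
]

STOPWORDS = {"the", "a", "is", "but", "and"}

def bigrams(text):
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    return zip(words, words[1:])

phrase_counts = Counter(p for review in reviews for p in bigrams(review))
print(phrase_counts.most_common(3))
```

In the toy data, “battery life” appears in every review, which is the kind of repeated, customer-phrased concern that rarely shows up verbatim in survey responses.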

The Role of Competitor Analysis in Product Research

Understanding what competitors offer and where they fall short is a core input to product research, not a separate exercise. If you are trying to identify products that would satisfy customers, you need to understand what those customers are currently using and what they find inadequate about it.

This is not about copying what competitors do. It is about mapping the space of existing solutions so you can identify genuine white space. A product that is marginally better than what already exists on a dimension customers do not particularly value is not a satisfying product. A product that addresses a frustration that every existing option shares is.

The most commercially interesting product opportunities tend to sit at the intersection of high customer frustration and low competitive response. Finding that intersection requires systematic competitor analysis alongside customer research, not instead of it.

BCG’s work on innovation, including their research on R&D effectiveness, consistently points to the same principle: the products that succeed are those built around a genuine gap, not those built around what was easiest to develop or most interesting to the internal team.

Connecting Research Output to Actual Decisions

Research that does not change a decision is an expensive document. This sounds obvious, but the failure mode is common. A research programme runs, the findings are presented, the deck is filed, and the product roadmap continues largely unchanged because the research was commissioned too late, framed too broadly, or presented to people who did not have the authority to act on it.

The fix is to define the decision before you design the research. What will you do differently if the evidence suggests X versus Y? If you cannot answer that question clearly, the research is not ready to commission. This is not a methodological point, it is a governance one. Research needs a decision-maker attached to it from the start.

I have judged the Effie Awards, which recognise marketing effectiveness, and the entries that stand out are almost always the ones where the insight was specific enough to drive a concrete product or positioning decision. Vague insight produces vague strategy. The research programmes that produce vague insight are usually the ones where nobody defined what a clear finding would look like before the work started.

Forrester’s framing on how marketing teams can become more systematic is relevant here. The same logic applies to research: structure the process so that outputs connect to inputs, and decisions connect to data.

What Satisfaction Actually Means in a Product Context

Customer satisfaction is not a static target. What satisfies a customer today is shaped by every other product they have used recently. Expectations migrate upward, and they migrate faster in categories where innovation is visible and frequent.

This has a practical implication for research design. If you are researching what would satisfy customers today, you are building a product for today’s expectations. If your development cycle is twelve to eighteen months, the product will launch into a different expectation environment. Research needs to account for trajectory, not just current state.

One way to do this is to include questions in your research that probe for emerging frustrations, not just current ones. What do customers find acceptable now but would prefer to be better? What have they seen in adjacent categories that they wish existed in yours? These questions surface the direction of travel rather than just the current position.

I have seen this play out in practice with clients who operated in categories where the pace of change was faster than their research cadence. By the time a product launched, the insight that had shaped it was already dated. The answer is not to research faster, it is to build ongoing customer listening into the process rather than treating research as a project with a start and end date.

Building a Research Process That Is Proportionate to the Decision

Not every product decision warrants a full research programme. A new flavour variant in an established range is a different scale of decision from a category extension or a new market entry. The research investment should be proportionate to the commercial risk and the size of the decision.

For smaller decisions, a lightweight approach works: a rapid round of customer interviews, a review of existing data, and a structured review of competitor gaps. For larger decisions, the investment in a full quantitative programme with conjoint and concept testing is justified by the cost of getting it wrong.

The mistake is applying the same approach to every decision regardless of scale. Businesses that run full research programmes for minor product decisions burn budget and slow down. Businesses that skip research for major ones take on avoidable risk. The discipline is in calibrating the method to the decision, not defaulting to one approach for everything.

One practical framework: before commissioning any research, estimate the cost of being wrong. If the cost of launching a product that misses the mark is five million pounds, spending fifty thousand on research to reduce that risk is straightforward to justify. If the cost of being wrong is fifty thousand, a fifty thousand pound research programme is not proportionate.
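That proportionality check can be written down as simple expected-value arithmetic. The probabilities below are illustrative assumptions, not figures from the article: research is worth commissioning when its cost is below the reduction in expected loss it buys.

```python
# Sketch of the proportionality check: research is justified when it
# costs less than the expected loss it removes. Figures are illustrative.
def research_is_proportionate(cost_of_being_wrong: float,
                              prob_wrong_without_research: float,
                              prob_wrong_with_research: float,
                              research_cost: float) -> bool:
    risk_removed = cost_of_being_wrong * (
        prob_wrong_without_research - prob_wrong_with_research)
    return research_cost < risk_removed

# GBP 5m downside, research assumed to cut failure odds from 40% to 20%:
big_bet = research_is_proportionate(5_000_000, 0.40, 0.20, 50_000)
# Same odds, but only a GBP 50k downside:
small_bet = research_is_proportionate(50_000, 0.40, 0.20, 50_000)
print(big_bet, small_bet)
```

On the big decision the GBP 50k programme removes an expected GBP 1m of risk; on the small one it removes GBP 10k, which is the asymmetry the framework is designed to expose.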

The Honest Limitation of Research

Research tells you what customers think they want. It is less reliable at predicting what they will actually do when faced with a real purchasing decision, a real price, and real competition on a real shelf or website.

This is not an argument against research. It is an argument for treating research output as directional rather than definitive, and for building in real-world testing wherever possible. A limited market test or a minimum viable product launch in a contained geography will tell you things that no amount of pre-launch research can.

The best product research programmes I have seen combine pre-launch insight with post-launch measurement. The research shapes the initial product and positioning. Early sales data, customer feedback, and return rates then inform the refinement. Treating the launch as the end of the research process, rather than the beginning of a new phase, is where many businesses lose the thread.

The broader challenge of connecting research to measurement is one that runs through everything in market research. If you want to go deeper on the methods and frameworks that make this work in practice, the market research section of The Marketing Juice covers the full landscape, from research design through to translating findings into strategy.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the most reliable method for identifying products customers will actually buy?
Conjoint analysis combined with in-depth customer interviews gives the most reliable pre-launch signal. Conjoint reveals how customers trade off features against price without requiring them to articulate it directly. Interviews surface the needs and frustrations that surveys miss. Neither alone is sufficient, but together they give you a defensible basis for product decisions.
How many customer interviews do you need to conduct useful product research?
For qualitative research, ten to fifteen interviews with well-selected participants who genuinely represent your target segment will typically surface the main themes. Beyond that, you start hearing the same things repeated. The quality of participant selection matters more than the number of interviews. A poorly recruited sample of fifty will give you less than a well-recruited sample of twelve.
What data sources can businesses use before commissioning primary research?
Customer service records, search query data, product reviews, returns data, and social listening are all valuable starting points. These sources are often underused because they require categorisation and analysis rather than arriving pre-packaged as insight. Systematically reviewing six months of support tickets or customer reviews can surface product gaps that would take a significant primary research budget to find from scratch.
How do you ensure product research leads to actual decisions rather than just reports?
Define the decision before you design the research. Ask: what will we do differently if the evidence suggests X versus Y? If you cannot answer that clearly, the research brief is not ready. Attach a decision-maker to the research programme from the start, and set a clear threshold for what the findings need to show before a product direction is confirmed or abandoned.
How often should businesses conduct product satisfaction research?
Customer expectations shift continuously, so research should be ongoing rather than project-based. A lightweight quarterly customer listening programme, combining a small number of interviews with review analysis and support data review, will keep you closer to shifting expectations than an annual research project. Reserve larger quantitative programmes for major product decisions where the commercial stakes justify the investment.
