Product Research That Starts With Customers

Identifying products that satisfy customers is not a creative exercise. It is a research problem, and most companies approach it backwards, starting with what they want to build and then looking for evidence that customers might want it too. The more reliable approach starts with customer behaviour, unmet needs, and the gap between what people settle for and what they would genuinely prefer.

Done well, product research reduces the risk of launching something the market does not need. Done poorly, it gives leadership a false sense of validation and accelerates spending in the wrong direction.

Key Takeaways

  • Product research that starts with what you want to build, rather than what customers struggle with, is not research. It is confirmation bias with a budget attached.
  • Behavioural data tells you what customers do. Qualitative interviews tell you why. You need both before drawing conclusions about product fit.
  • The most useful product insights often come from studying what customers tolerate, not what they say they love. Tolerance signals unmet need.
  • Internal assumptions about customer needs should be treated as hypotheses, not facts. Every assumption should have a research method attached to it.
  • Measuring whether a product actually satisfied customers after launch is as important as the research done before it. Most companies skip this step entirely.

Early in my agency career, I worked with a mid-sized retailer that had spent considerable budget developing a loyalty product their customers had supposedly asked for. The problem was that the “asking” had happened in a single focus group, conducted by the internal team, with questions that were leading enough to make a barrister wince. The product launched, performed poorly, and the post-mortem revealed what better research would have shown upfront: customers did not want more points. They wanted faster checkout and better stock availability. Two entirely different problems, requiring entirely different solutions.

Why Most Product Research Produces the Wrong Answers

The fundamental error in most product research is that it is designed to validate rather than to discover. Teams arrive with a product concept and then design research to confirm it. Participants are shown prototypes before they have been asked about their problems. Surveys are written with response options that happen to align with the features already planned. The research becomes theatre, and the resulting data becomes a commercial justification rather than a genuine signal.

This matters because the cost of getting it wrong is not just the failed product. It is the opportunity cost of not building the right one. I have seen this pattern across multiple industries, and it tends to be worse in organisations where product teams and marketing teams operate in silos. Each group interprets the customer through its own lens, and nobody is systematically responsible for understanding what would genuinely improve the customer’s situation.

There is a broader point here about the role of marketing in product development. If a company genuinely understood what customers needed and built products that delivered it, marketing would be easier, cheaper, and more effective. The companies that struggle most with marketing are often the ones with product-market fit problems they have not diagnosed. They spend more on acquisition to compensate for poor retention, and they mistake declining conversion rates for a media problem when it is a product problem.

If you are building out your broader research capability, the Market Research and Competitive Intel hub covers the full range of methods, from competitive analysis to demand sensing, and is worth reading alongside this article.

Starting With the Customer’s Problem, Not Your Product Idea

The most productive product research begins with a clear-eyed look at what customers are struggling with, what they are tolerating, and where the gap is between their current experience and a genuinely better one. This is different from asking customers what they want. People are not good at articulating what they want, particularly when it comes to products that do not yet exist. What they are good at is describing frustration, workarounds, and the moments where things go wrong.

Structured qualitative interviews are the most reliable starting point. The goal is not to present options but to map the customer’s experience in detail, from the moment a need arises to the point where it is resolved, or not resolved. You are looking for friction points, compensating behaviours, and moments where the customer settles for something less than what they actually need.

A useful frame here is the distinction between what customers do and what they say. Behavioural data, from purchase history, support tickets, product usage logs, and search patterns, tells you what customers actually do when left to their own devices. Qualitative research tells you how they interpret and explain their own behaviour. Both are necessary. Relying only on what customers say gives you aspirational answers. Relying only on what they do gives you patterns without explanation.

When I was working with a financial services client a number of years ago, their usage data showed that a significant portion of customers who signed up for a premium tier downgraded within 90 days. The internal assumption was that the product was priced too high. When we ran structured interviews with churned customers, the real issue was that the onboarding experience was confusing enough that customers never reached the features they had paid for. The product was fine. The experience around it was not. No amount of price testing would have fixed that.

Quantitative Research: What the Numbers Can and Cannot Tell You

Once you have a working hypothesis about customer needs, quantitative research helps you understand the scale and distribution of those needs across a broader population. Surveys, conjoint analysis, and market sizing exercises all have a role here, but each comes with limitations that are worth being clear about before you invest in them.

Surveys are useful for measuring the prevalence of a problem or the relative importance of different product attributes across a sample. They are not useful for generating insights you have not already thought of. A survey can tell you that 60 percent of respondents find checkout friction to be a significant issue. It cannot tell you that the real problem is the limited range of payment options on offer, unless you already knew to ask about that. This is why qualitative research should come first. It generates the hypotheses. Quantitative research tests their scale.

Conjoint analysis is more sophisticated and genuinely useful when you need to understand trade-offs. It asks respondents to choose between product configurations rather than rate individual features, which produces more reliable data about what people would actually choose versus what they say they value. The limitation is complexity. Running a well-designed conjoint study requires statistical rigour, and the results are only as good as the attributes you chose to include in the first place.

Market sizing is often treated as a research activity when it is really a modelling exercise. You are making assumptions about the total addressable population, the proportion with a given need, and the proportion likely to switch to a new solution. Each assumption compounds the uncertainty in the final number. BCG’s work on experience curve economics is a useful reminder that market position and cost structure are often better predictors of commercial success than market size alone. A large market with entrenched competitors at scale is frequently harder to enter than a smaller market with fragmented supply.
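The compounding effect is easier to feel with numbers in front of you. The sketch below uses entirely hypothetical figures and assumption ranges; the point is only that two modest-looking uncertainty ranges multiply into a very wide spread in the final estimate.

```python
# Illustrative market-sizing model showing how assumption ranges compound.
# All numbers are hypothetical, not drawn from any real market.

def size_range(population, need_share, switch_share):
    """Multiply the low and high bounds of each assumption to get the
    plausible range of addressable customers."""
    low = population * need_share[0] * switch_share[0]
    high = population * need_share[1] * switch_share[1]
    return low, high

population = 2_000_000            # total addressable population
need_share = (0.20, 0.40)         # proportion with the need (low, high)
switch_share = (0.02, 0.08)       # proportion likely to switch (low, high)

low, high = size_range(population, need_share, switch_share)
print(f"Addressable customers: {low:,.0f} to {high:,.0f}")
# Two honest-looking ranges compound into an 8x spread (8,000 to 64,000),
# which is why a single point estimate of market size can mislead.
```

Presenting the range rather than a midpoint forces the conversation back to which assumption is most uncertain, which is where the research effort should go.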

Competitive Analysis as a Product Research Input

Understanding what competitors offer is a standard part of product research, but most competitive analysis stops at the surface level. Teams catalogue features, compare pricing, and map positioning. What they rarely do is analyse where competitors are failing their customers, which is where the real product opportunity often sits.

Review mining is underused as a research method. Reading through customer reviews of competing products, particularly the three and four star reviews rather than the extremes, gives you a detailed picture of what customers like, what they tolerate, and what they wish were different. This is free, publicly available, and frequently more honest than anything you will get from a commissioned survey.
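Even a crude pass over the mid-band reviews surfaces themes. The sketch below uses made-up reviews and a simple keyword count; a real analysis would use a proper stopword list or topic modelling, but the mechanics are the same: filter to three and four star ratings, then look for recurring complaint language.

```python
import re
from collections import Counter

# Hypothetical competitor reviews as (star_rating, text) pairs.
reviews = [
    (5, "Love it, works perfectly"),
    (4, "Good product but checkout was slow and confusing"),
    (3, "Fine overall, slow shipping and slow support replies"),
    (4, "Decent, wish the checkout had more payment options"),
    (1, "Terrible, broke on day one"),
]

# Focus on 3-4 star reviews: satisfied enough to stay, honest about friction.
mid_band = [text.lower() for stars, text in reviews if 3 <= stars <= 4]

# Count recurring words, dropping filler terms (illustrative stopword list).
stopwords = {"but", "and", "the", "was", "had", "wish", "more", "fine",
             "good", "decent", "overall", "product"}
words = Counter(
    w for text in mid_band
    for w in re.findall(r"[a-z]+", text)
    if w not in stopwords
)
print(words.most_common(3))  # "slow" and "checkout" surface as themes
```

The output is not an answer in itself; it is a list of hypotheses to take into qualitative interviews.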

Support forum analysis, Reddit threads, and community discussions serve a similar function. People who are frustrated enough to post publicly about a product are giving you a detailed brief on unmet needs. The signal-to-noise ratio varies, but the effort required to sift through it is low relative to the insight it can generate.

The BCG piece on business model transformation in IT services makes a point that applies equally to product development: companies that define their competitive position too narrowly tend to miss the structural shifts that create new customer needs. Competitive analysis should include adjacent categories, not just direct competitors. Customers do not always solve problems with the obvious solution. Understanding the full range of alternatives they consider, including doing nothing, is part of understanding the competitive landscape properly.

Testing Before You Build: Prototypes, MVPs, and Smoke Tests

The gap between a customer saying they want something and actually paying for it is significant. One of the most common mistakes in product development is treating expressed interest as purchase intent. Research methods that involve no commitment from the respondent tend to overestimate demand. The solution is to design research that requires some form of genuine action before you invest in full development.

Prototype testing, where customers interact with a working or simulated version of the product, is more reliable than concept testing because it surfaces usability problems and mismatched expectations before they become expensive. The goal is not to get positive feedback. It is to watch what customers do when they actually try to use the product, which is frequently different from what they said they would do.

Smoke tests are a more direct measure of demand: you present a product page or offer to real potential customers and measure whether they take action, such as signing up, clicking through, or entering payment details. The ethical version of this involves being transparent about the product being in development, but the behavioural signal you get from a real decision is orders of magnitude more reliable than a survey response.

I ran a version of this with a client in the professional services sector who wanted to launch a new subscription product. Rather than building the product and hoping, we created a landing page describing the offer and ran a small paid campaign to drive traffic. The conversion rate was low enough to suggest that either the positioning was wrong or the demand was not there at the price point they had in mind. We ran a second version with different positioning and a lower price point. The conversion rate improved enough to justify a limited pilot. The product that eventually launched was substantially different from the one originally planned, and it launched into a market we had already tested rather than one we had assumed.
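When comparing smoke-test variants like this, it is worth checking that a conversion lift is not just noise before acting on it. A standard two-proportion z-test does the job; the traffic and conversion numbers below are hypothetical, chosen only to illustrate the calculation.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates.
    As a rule of thumb, |z| > 1.96 suggests a real difference at
    roughly 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic: original landing page (A) vs repositioned version (B).
z = two_proportion_z(conv_a=12, n_a=1000, conv_b=30, n_b=1000)
print(f"z = {z:.2f}")  # comfortably above 1.96, so B's lift is unlikely to be noise
```

With smoke-test traffic volumes, small absolute differences are often within noise, which is one reason to run the campaign long enough to accumulate a usable sample before calling the result.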

The Role of Customer Satisfaction Data in Product Research

Product research is not only a pre-launch activity. Understanding whether your existing products are satisfying customers is a continuous research function, and the findings from that work should feed directly into product development decisions. Most companies collect satisfaction data in some form but use it primarily for reporting rather than for decision-making.

Net Promoter Score is the most commonly used metric, and it has real limitations. A single number tells you very little about why customers are satisfied or dissatisfied, or what would need to change to shift the score. It is a useful aggregate indicator but a poor diagnostic tool. The more useful version of satisfaction research asks customers to rate specific aspects of the product experience and to identify the one or two things that, if improved, would make the most difference to them.
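The calculation itself is trivial, which makes it easy to show why the aggregate number hides so much. In the sketch below, both hypothetical response sets produce the same NPS from very different distributions: one is a mildly positive customer base, the other splits into delighted customers and outright failures that need entirely different fixes.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 'would you recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Two hypothetical response sets with identical scores but very
# different stories underneath.
broadly_content = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
polarised = [10, 10, 10, 10, 10, 0, 0, 8, 8, 8]

print(nps(broadly_content))  # 30.0
print(nps(polarised))        # 30.0, despite customers having outright failures
```

This is the sense in which NPS is a useful aggregate indicator but a poor diagnostic: the same score demands different product responses depending on the distribution beneath it.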

Forrester’s work on measuring marketing’s contribution to business performance is relevant here because it highlights the challenge of attributing outcomes to specific inputs. The same problem applies to product satisfaction. If you improve three things simultaneously, you cannot easily determine which improvement drove the change in satisfaction scores. Sequenced changes with clear measurement periods are more informative than broad product overhauls, even if they are slower.

There is also a structural point worth making about who owns customer satisfaction data in most organisations. Marketing tends to own brand tracking. Product teams own usage data. Customer service owns complaint volumes. Nobody owns the integrated view of whether customers are genuinely satisfied and what is driving the variation. In my experience, the companies that do product research well are the ones that have solved this ownership problem and created a shared view of the customer that crosses functional boundaries.

Translating Research Into Product Decisions

Research that does not change decisions is an expensive way to feel thorough. The point of product research is to reduce uncertainty in a specific decision, and the research design should be driven by what decision you are trying to make. Before commissioning any research, the question worth asking is: what would we do differently if the research came back with a different answer? If the answer is nothing, the research is not necessary.

The translation from research findings to product decisions requires a clear framework for prioritisation. Not all customer needs are equally important, equally common, or equally feasible to address. A need that is widely felt but easy to work around is a lower priority than a need that is less common but creates significant friction when it occurs. Effort-to-impact mapping, where you assess the development cost of addressing each identified need against the likely impact on satisfaction and commercial performance, is a more rigorous approach than gut feel, even when the inputs to the model are imprecise.

There is a version of this that I have seen go wrong repeatedly in agency settings. A client commissions research, the research produces findings, and then the findings sit in a presentation deck that nobody refers to when the product roadmap is being built three months later. The research was good. The process for translating it into decisions was missing. Building that translation process, including clear ownership, defined decision points, and a mechanism for surfacing research findings at the right moment in the planning cycle, is as important as the quality of the research itself.

MarketingProfs has documented how customer-first organisations tend to operate differently at a structural level, with customer priorities embedded in business systems rather than treated as a separate function. The insight is that putting customers first is a process design problem as much as a culture problem. You need systems that surface customer data at the right points in the decision-making process, not just a stated commitment to customer centricity.

The broader point is that product research is only as valuable as the decisions it informs. If you are building out a systematic approach to market research across your organisation, the Market Research and Competitive Intel hub has a range of articles covering how to structure that capability, from competitor analysis to demand research and beyond.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the best research method for identifying products customers want?
There is no single best method. The most reliable approach combines structured qualitative interviews to identify unmet needs, behavioural data to understand what customers actually do, and quantitative research to measure the scale of those needs across a broader population. Starting with qualitative research before moving to quantitative is usually the more productive sequence, because qualitative work generates the hypotheses that quantitative research then tests.
How do you avoid confirmation bias in product research?
Confirmation bias in product research usually enters through question design, participant selection, or the interpretation of findings. Practical safeguards include writing research questions before you have a product concept, having someone outside the product team review the research design, and separating the people who run the research from the people who have a stake in a particular outcome. Treating internal assumptions as hypotheses that need to be tested, rather than facts that need to be confirmed, is the mindset shift that matters most.
How can you test product demand before investing in full development?
Smoke tests are one of the most direct methods. Creating a landing page that describes the product and driving real traffic to it gives you a behavioural signal (whether people actually take action) that is more reliable than survey-based intent measures. Prototype testing with small groups of target customers surfaces usability problems and expectation gaps before they become expensive. Both methods require some investment but substantially less than building a product that the market does not want.
What is the difference between product research and market research?
Market research is the broader category, covering demand sizing, competitive analysis, customer segmentation, and category dynamics. Product research is a subset of market research focused specifically on understanding what customers need, what they would value in a product, and whether a specific product concept would satisfy those needs. Product research draws on market research methods but applies them to a more specific question: would this product, or a version of it, genuinely improve the customer’s situation?
How do you measure whether a product has satisfied customers after launch?
Post-launch satisfaction research should go beyond aggregate scores like Net Promoter Score and ask customers to evaluate specific aspects of the product experience. Tracking the gap between what customers expected before purchase and what they experienced after it is particularly useful. Retention rates, repeat purchase behaviour, and support ticket volumes are behavioural indicators that complement survey-based measures. The most informative approach sequences changes and measures each one separately, so you can identify which improvements are actually driving satisfaction rather than assuming all changes are contributing equally.