Before You Build It: How to Research Whether a New Product Will Sell

Researching whether a new product will sell means testing demand before you commit budget to development, production, or launch. The goal is not to eliminate risk; it is to reduce the cost of being wrong. Done well, product validation research tells you whether real people will pay real money for what you are proposing, and under what conditions.

Most businesses skip this step or do it badly, confusing enthusiasm from colleagues and friends with evidence of market demand. The two are not the same, and conflating them is one of the more expensive mistakes I have seen companies make.

Key Takeaways

  • Demand research and product enthusiasm are different things. Internal excitement is not a proxy for market willingness to pay.
  • Keyword and search data tells you what people are actively looking for, not just what they say they want in a survey.
  • Competitor analysis is not just about pricing. It reveals gaps, positioning failures, and underserved segments worth targeting.
  • Small-scale paid tests, landing page experiments, and pre-orders are among the most commercially honest validation methods available.
  • The question is not “will anyone buy this?” It is “will enough people pay enough money to make this worth building?”

Over the years, I have worked across more than 30 industries, from fast-moving consumer goods to enterprise software to financial services. The pattern I see repeatedly is this: companies invest in marketing to solve a problem that is actually a product problem. If a product genuinely solves a real problem for a clearly defined audience, marketing accelerates growth. If it does not, marketing is just an expensive way to find out sooner. Getting the research right before you build saves you from that outcome.

If you are thinking about this in the context of a broader go-to-market plan, the Go-To-Market and Growth Strategy hub covers the commercial frameworks that sit around product launch, from audience development to channel strategy to revenue planning.

Why Most Product Research Fails Before It Starts

The failure mode I see most often is not a lack of research. It is research designed to confirm a decision that has already been made. Someone in the business has fallen in love with an idea, and the “research” becomes a search for supporting evidence rather than a genuine test of viability.

I saw this clearly when I was running a turnaround at an agency that had been pitching a new service line to existing clients. The internal conviction was strong. The founder had held informal conversations with three clients who said it sounded interesting. That was the validation. When we actually ran structured research with a broader sample, the picture was quite different. The idea was interesting to people. It was not something they would pay for at the margin required to make it commercially viable.

There is a difference between “that sounds useful” and “I will give you money for that.” Good product research is designed to surface that gap, not paper over it.

A second common failure is researching the wrong question. Teams ask “do people like this idea?” when they should be asking “is there a problem here that is painful enough, frequent enough, and underserved enough that people will change their behaviour and pay to solve it?” Those are different questions, and they require different methods.

Start With the Problem, Not the Product

Before you research whether your product will sell, you need to research whether the problem it solves is real and significant. This sounds obvious. In practice, it is the step most teams skip because they have already moved mentally from “problem” to “solution.”

The questions worth answering at this stage are: How frequently does this problem occur? How much does it cost the person experiencing it, in time, money, or frustration? What are they currently doing to solve it? Why is that solution inadequate? These questions can be answered through a combination of qualitative interviews, online community analysis, and review mining.

Review mining is underused. Go to Amazon, G2, Trustpilot, or any review platform relevant to your category. Read the one-star and two-star reviews of existing products. The language people use to describe what is missing or broken is some of the most commercially valuable copy you will ever find. It tells you exactly what the market wants that it is not getting. That is your product brief.

For B2B products, the approach is similar but the sources differ. LinkedIn conversations, industry forum threads, and customer advisory boards are more useful than consumer review platforms. If you are operating in a regulated or specialised sector, the research layer is even more important. I have written separately about the specific dynamics of B2B financial services marketing, where product-market fit research has to account for compliance constraints and long buying cycles that consumer product frameworks simply do not address.

How to Use Search Data to Validate Demand

Search data is one of the most honest signals available to product researchers. Unlike surveys, people do not perform for a search engine. When someone types a query into Google, they are expressing a genuine need in real time. Aggregated across millions of searches, that data tells you the shape and scale of demand in a way that focus groups cannot.

The mechanics are straightforward. Use a keyword research tool to look at search volume for terms related to your product category. Look at the language people use, not just the volume. Are they searching for solutions (“how to fix X”) or products (“best X tool”) or comparisons (“X vs Y”)? Each of these signals a different stage of intent and a different type of demand.
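The solution/product/comparison split above can be roughed out programmatically when you are working with a large keyword export. This is an illustrative sketch with assumed regex patterns, not an exhaustive intent taxonomy; calibrate the patterns to your own category.

```python
import re

# Illustrative intent-stage patterns. Order matters: the first match wins.
INTENT_PATTERNS = [
    ("comparison", re.compile(r"\bvs\b|\bversus\b|\balternative", re.I)),
    ("product",    re.compile(r"\bbest\b|\btop\b|\btool\b|\bsoftware\b", re.I)),
    ("solution",   re.compile(r"\bhow to\b|\bfix\b|\bsolve\b", re.I)),
]

def classify(query: str) -> str:
    """Assign a query to a rough intent stage, or 'unclassified'."""
    for stage, pattern in INTENT_PATTERNS:
        if pattern.search(query):
            return stage
    return "unclassified"

queries = ["best crm tool", "how to fix duplicate contacts", "hubspot vs salesforce"]
print({q: classify(q) for q in queries})
```

Run over a full keyword export, the distribution across stages gives you a quick read on whether demand in the category is problem-aware, solution-aware, or comparison-stage.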

Pay attention to trend data, not just absolute volume. A category with growing search volume over 24 months is a different commercial proposition from one with flat or declining volume. Tools like SEMrush’s growth analysis features can help you map this over time and identify whether you are entering a growing or contracting space.

Also look at what is currently ranking for the most relevant terms. If the top results are weak, thin, or clearly not purpose-built for the query, that is a signal of an underserved market. If the results are dominated by well-funded, well-established players with strong brand recognition, you need a very clear differentiation story before you enter.

Search data will not tell you everything. It tells you about expressed demand, not latent demand. If your product is genuinely new and people do not yet know to search for it, keyword volume will be low or zero. That is not necessarily a red flag, but it does mean you need other validation methods and a stronger go-to-market investment in category creation.

What Competitor Analysis Actually Tells You

Most competitor analysis is done badly. Teams list out competitors, note their pricing, screenshot their websites, and call it done. That misses the most commercially useful information.

What you are actually trying to understand from competitor analysis is: what are existing players doing poorly, what segments are they ignoring, and what do their customers complain about? The answers to those questions tell you where the market has space for a new entrant.

Look at competitor advertising. What claims are they making? What problems are they leading with? This tells you what the market currently believes is important. If every competitor is leading with the same benefit, that is either a genuine table-stakes requirement or a category blind spot where differentiation is possible.

Look at their pricing architecture carefully. Are they leaving segments underserved at the low or high end? Is there a usage pattern that their pricing model penalises? Pricing gaps are often product opportunities in disguise.

When I was involved in a digital marketing due diligence process for a portfolio company, the competitor audit was the single most revealing piece of work. It showed that the three main players in the category were all competing on the same two dimensions and ignoring a third dimension that customers consistently ranked as important. That was the product brief for the new entrant, and it was sitting in plain sight in competitor review data.

Running Qualitative Research Without Wasting Time

Qualitative research, meaning interviews, focus groups, and observational research, is valuable but easy to misuse. The most common misuse is treating it as validation when it is actually exploration. Qualitative research tells you why people think and behave the way they do. It does not tell you how many of them do.

For product validation purposes, qualitative interviews are most useful in the early stages when you are still defining the problem and mapping the customer’s current behaviour. A structured interview with 10 to 15 people who represent your target customer will surface the language, the pain points, the workarounds, and the emotional context around the problem you are trying to solve. That is enormously useful for product design and positioning.

The discipline required is to ask about behaviour and experience, not about your product idea. “Walk me through the last time you had to deal with X” is a better question than “what do you think of this concept?” The first question surfaces real behaviour. The second invites social performance. People are generally too polite to tell you your idea is weak.

If you are selling to businesses, the interview process requires more patience. Decision-makers are harder to access, buying processes are more complex, and the person you are talking to may not be the person who controls the budget. Understanding the full buying committee is part of the research. The corporate and business unit marketing framework for B2B tech companies is useful context here, particularly for understanding how product decisions get made across different levels of an organisation.

How to Test Demand With Real Money

The most honest form of product validation is asking people to part with money before the product exists. Everything else (surveys, interviews, concept tests) is a proxy. Actual purchasing behaviour is the real signal.

There are several practical ways to do this without building the full product. A landing page with a clear value proposition and a pre-order or waitlist mechanism will tell you more in two weeks of paid traffic than six months of internal debate. The conversion rate on that page, combined with the cost of acquiring that traffic, gives you a rough unit economics model to stress test.

Crowdfunding platforms serve a similar function for physical products. A successful crowdfunding campaign is not just a funding mechanism. It is a demand signal with real financial commitment behind it. BCG’s work on go-to-market strategy for product launches makes the point that pre-launch demand signals are among the most reliable indicators of commercial viability, even in complex, regulated categories like biopharma.

For B2B products, the equivalent is a paid pilot or a letter of intent. If a prospective customer will not commit to a paid pilot at a meaningful discount, that tells you something important about how they actually value the solution versus how they describe valuing it in conversation.

I have used pay-per-appointment lead generation as a validation mechanism in early-stage B2B product launches. If you can generate qualified sales conversations at a cost that makes commercial sense, you have early evidence that the audience exists, can be reached, and is interested enough to engage. That is a meaningful data point before you invest in full product development.

Quantitative Research: What to Measure and What to Ignore

Surveys are the most commonly used quantitative research tool and the most commonly misinterpreted. The problem is not with surveys as a method. It is with how the results are read.

When you ask people in a survey whether they would buy a product, the “yes” rate is almost always meaningfully higher than actual purchase behaviour. People overestimate their likelihood to act on intentions. A survey that shows 40% of respondents saying they would “definitely” or “probably” buy your product does not mean 40% market penetration is achievable. The gap between stated intent and purchase behaviour is one of the more reliable findings in consumer psychology.
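A common market-research convention is to discount each stated-intent band by a conversion weight before sizing demand. The weights below are illustrative assumptions only; calibrate them against your own category's history of stated intent versus actual purchase.

```python
# Discounting stated purchase intent. Weights are illustrative assumptions,
# not published benchmarks; replace with category-specific calibration.
INTENT_WEIGHTS = {
    "definitely": 0.4,   # e.g. assume ~40% of "definitely buy" actually do
    "probably":   0.1,
    "might":      0.02,
    "unlikely":   0.0,
}

def adjusted_demand(responses: dict) -> float:
    """responses maps an intent label to the share of respondents giving it."""
    return sum(share * INTENT_WEIGHTS.get(label, 0.0)
               for label, share in responses.items())

survey = {"definitely": 0.15, "probably": 0.25, "might": 0.30, "unlikely": 0.30}
print(round(adjusted_demand(survey), 3))
```

Here a survey showing 40% stated intent ("definitely" plus "probably") translates to an adjusted demand estimate of roughly 9%, which is a very different commercial proposition.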

What surveys are useful for is understanding relative preference, price sensitivity, and feature prioritisation. Conjoint analysis, a technique where respondents choose between product configurations with different attributes and price points, is particularly useful for understanding what people will trade off against what. It forces choices rather than allowing respondents to say they want everything.

Price sensitivity research deserves its own attention. The Van Westendorp price sensitivity model asks four questions about price points: the price at which a product would seem too cheap to trust, a bargain, beginning to seem expensive, and too expensive to consider. The overlap between the acceptable price range and your cost structure tells you quickly whether the unit economics of the product are viable before you build anything.
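A simplified version of that analysis can be computed directly from respondent answers: for each candidate price, the share of respondents for whom it falls between their "too cheap" and "too expensive" thresholds. This sketch uses hypothetical responses and only two of the four curves; a full Van Westendorp analysis also uses the "bargain" and "getting expensive" answers to locate the intersection points.

```python
# Hypothetical responses to two of the four Van Westendorp questions.
respondents = [
    {"too_cheap": 10, "too_expensive": 40},
    {"too_cheap": 15, "too_expensive": 50},
    {"too_cheap": 20, "too_expensive": 60},
    {"too_cheap": 12, "too_expensive": 45},
]

def acceptance(price):
    """Share of respondents for whom a price is in their acceptable range."""
    ok = sum(1 for r in respondents
             if r["too_cheap"] < price < r["too_expensive"])
    return ok / len(respondents)

for price in (10, 25, 45, 65):
    print(price, acceptance(price))
```

Sweep the candidate prices across your feasible cost structure: if no price clears both a healthy acceptance share and your margin requirement, the unit economics fail before anything is built.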

Channel Fit Is Part of the Research

A product can have genuine demand and still fail commercially if the cost of reaching buyers is too high relative to the margin available. Channel fit, the alignment between where your buyers are and how efficiently you can reach and convert them, is part of product viability research, not a separate question to answer later.

I have seen this play out in category after category. A product with real demand but a customer acquisition cost that exceeds lifetime value is not a viable business. Understanding the cost structure of reaching your target audience before you commit to a launch is commercially essential.
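The acquisition-cost-versus-lifetime-value check is a one-line calculation worth doing before launch. A minimal sketch with hypothetical inputs, using the standard simplification that expected customer lifetime is the reciprocal of the monthly churn rate:

```python
# Simple LTV vs CAC check. All inputs are hypothetical.
def ltv(monthly_margin, monthly_churn):
    # Expected lifetime in months is 1 / churn, so LTV = margin / churn.
    return monthly_margin / monthly_churn

cac = 300  # blended customer acquisition cost from channel tests
customer_ltv = ltv(monthly_margin=40, monthly_churn=0.05)
print(customer_ltv)                   # 800.0
print(round(customer_ltv / cac, 2))   # 2.67
```

An LTV/CAC ratio below 1 is a non-viable business regardless of demand; many operators look for a comfortable multiple above that before committing, since both inputs are estimates at this stage.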

For some products, endemic advertising, placing your message in environments where your audience is already engaged with the relevant topic, offers a more efficient route to early customers than broad-reach channels. Understanding which channels your target audience uses, and at what cost, is part of the research process.

Forrester’s work on intelligent growth models makes the case that sustainable commercial growth requires alignment between product, audience, and channel from the outset. Retrofitting channel strategy after the fact is expensive and often ineffective.

The Vidyard team has written usefully about why go-to-market feels harder now than it did five years ago. Part of the answer is channel fragmentation. Buyers are harder to reach efficiently, which raises the bar on product-market fit. If your product is a marginal improvement on existing solutions, the cost of getting that message to the right people at scale is likely to erode the margin. If it solves a genuinely painful problem in a meaningfully better way, the economics improve significantly.

Turning Research Into a Go or No-Go Decision

Research does not make decisions. People make decisions. But good research gives you the commercial framework to make a decision you can defend and, more importantly, learn from.

The decision framework I use is built around three questions. First: is there evidence of a real, painful, and underserved problem? Second: is there evidence that the target audience will pay a price that makes the unit economics viable? Third: is there a realistic route to reaching that audience at a cost that supports those economics?

If the answer to all three is yes, you have a viable product hypothesis worth investing in. If one of the three is unclear, that is the area to stress test further before committing. If two or more are unclear, you are not ready to build yet.
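That decision rule is simple enough to write down explicitly, which is useful when you want the go/no-go logic agreed before the research results come in. A hypothetical sketch:

```python
def go_no_go(answers: dict) -> str:
    """answers maps each of the three questions to 'yes' or 'unclear'.

    All yes -> go; one unclear -> stress test that area; more -> not ready.
    """
    unclear = [q for q, a in answers.items() if a != "yes"]
    if not unclear:
        return "go"
    if len(unclear) == 1:
        return f"stress test: {unclear[0]}"
    return "not ready to build"

print(go_no_go({"painful problem": "yes",
                "willingness to pay": "unclear",
                "route to audience": "yes"}))
```

Writing the rule down in advance matters more than the code itself: it stops the threshold from being renegotiated after an emotionally committed team sees an ambiguous result.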

The BCG framework for commercial transformation and go-to-market strategy emphasises that the companies that grow consistently are those that make disciplined decisions about where to compete, not just how to compete. That discipline starts with research that is honest about what it has and has not found.

One last thing worth saying. A thorough review of your existing web presence, messaging, and commercial infrastructure before a product launch will often reveal positioning gaps and conversion weaknesses that would undermine even a well-validated product. The checklist for analysing your company website for sales and marketing strategy is a useful pre-launch audit tool for exactly this reason.

Product research does not end at launch. The signals you collect from early customers (conversion rates, churn, support tickets, referral behaviour) are the continuation of the same research process. The companies that get this right treat launch as a hypothesis test, not a declaration of certainty.

More on the commercial frameworks that sit around this kind of decision-making is available in the Go-To-Market and Growth Strategy hub, which covers everything from audience development to channel selection to revenue modelling across different business contexts.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the fastest way to validate whether a new product will sell?

The fastest commercially honest method is a landing page with a clear value proposition, driven by a small paid traffic test. Conversion rate and cost per conversion give you a rough demand signal within days. Pre-orders or paid pilots add financial commitment to the signal, which makes the data more reliable than survey responses or stated intent.

How do you use keyword research to validate product demand?

Search data shows what people are actively looking for, which is a more reliable signal than what they say in surveys. Look at search volume for problem-oriented and solution-oriented queries in your category, analyse trend direction over 12 to 24 months, and assess the quality of what is currently ranking. Weak or thin results for high-intent queries often indicate an underserved market.

How many customer interviews do you need for product validation research?

For qualitative exploration, 10 to 15 structured interviews with people who represent your target customer will typically surface the main themes, pain points, and behavioural patterns. The goal is not statistical significance; it is depth of understanding. More interviews add diminishing returns once you start hearing the same themes repeated. Qualitative research answers “why,” not “how many.”

What is the biggest mistake companies make when researching a new product?

Designing research to confirm a decision that has already been made internally. When teams are emotionally committed to an idea, they tend to ask questions that invite agreement rather than challenge. The result is research that feels like validation but is actually rationalisation. Good product research is structured to find reasons not to proceed, and the product passes when it survives that scrutiny.

Does product research work differently for B2B versus B2C products?

The principles are the same but the mechanics differ. B2B research requires understanding the full buying committee, not just one stakeholder. Decision cycles are longer, the cost of switching is often higher, and the criteria for evaluation are more complex. Price sensitivity research needs to account for ROI framing rather than personal value judgements. Qualitative interviews are harder to arrange but often more revealing because the stakes for the respondent are professional, not just personal.