Market Validation: Stop Building Things Nobody Wants
Market validation is the process of testing whether real demand exists for a product, service, or idea before you commit significant resources to it. Done properly, it replaces assumption with evidence, narrows the gap between what you think the market wants and what it will actually pay for, and gives you a defensible basis for investment decisions.
Most businesses skip it, rush it, or confuse internal enthusiasm with external demand. The result is predictable: products that launch to silence, campaigns built on guesswork, and budgets spent proving the market wrong.
Key Takeaways
- Market validation is not about proving you’re right. It’s about finding out whether you’re wrong before it costs you.
- Surveys and focus groups tell you what people say. Search data, purchase behaviour, and waitlists tell you what people do. Weight the latter more heavily.
- Validation should happen in stages: concept, positioning, pricing, and channel, each with its own signal threshold before you move forward.
- The biggest validation failure is confirmation bias: running tests designed to confirm a decision already made, then calling the result evidence.
- A validated market still requires a validated go-to-market. Demand existing is not the same as you being positioned to capture it.
Why Validation Fails Before It Starts
The most common reason market validation fails has nothing to do with methodology. It fails because the people running it have already made up their minds.
I’ve sat in enough planning sessions to recognise the pattern. Someone senior has a strong conviction about a product or market opportunity. Validation gets commissioned, but the brief is shaped to support the conclusion rather than test it. Questions are written to elicit positive responses. Samples are drawn from existing customers who are already predisposed to like the brand. Results come back looking encouraging. The project gets greenlit. Six months later, the launch underperforms and everyone acts surprised.
This isn’t cynicism. It’s a structural problem. Validation is often funded by the same people who proposed the idea it’s meant to test. That creates an incentive to produce findings that move the project forward, not findings that stop it. If you want honest validation, you need to separate the people running the research from the people who benefit from a positive result.
The other failure mode is treating validation as a single event rather than a staged process. You test the concept once, get a green light, and assume everything downstream, including positioning, pricing, and channel fit, will sort itself out. It won’t. Each of those elements needs its own validation signal before you commit spend to it.
What You’re Actually Testing
Market validation isn’t one question. It’s four, and they need to be answered in sequence.
The first is whether the problem exists at meaningful scale. Not whether some people experience a problem, but whether enough people experience it acutely enough to motivate a purchase decision. This is where most early-stage validation work should concentrate. A problem that exists but isn’t painful enough to pay to solve is not a market opportunity. It’s a market observation.
The second is whether your solution is the right answer to that problem. This is distinct from the first question. You can correctly identify a real problem and still propose a solution that the market rejects, because the framing is wrong, the format doesn’t fit the workflow, or a good-enough alternative already exists. Testing the concept means testing whether your specific approach resonates, not just whether the category makes sense.
The third is whether the economics work. Willingness to pay is one of the most consistently overestimated variables in early market research. People will tell you they’d pay for something. What they actually spend money on is a different dataset. Price sensitivity testing, conjoint analysis, and real purchase behaviour from small-scale pilots will give you a more honest read than survey responses asking “how much would you pay for this?”
The fourth is whether you can reach the market cost-effectively. Demand existing doesn’t mean you can access it profitably. Channel validation (understanding where your target audience can be reached, at what cost, and with what conversion rate) is the piece most often omitted from validation frameworks. It’s also the piece that tends to break business models in practice.
If you’re building out your broader research and intelligence infrastructure alongside validation work, the Market Research & Competitive Intel hub covers the full range of tools and frameworks worth having in place.
The Difference Between What People Say and What People Do
Stated preference and revealed preference are not the same thing, and conflating them is one of the most expensive mistakes in market research.
Stated preference is what people tell you in surveys, interviews, and focus groups. It’s useful for understanding how people frame problems, what language they use, and what they say they value. It’s unreliable for predicting purchase behaviour, because people are poor judges of their own future decisions and because social desirability bias distorts responses in a research setting.
Revealed preference is what people actually do. Search volume tells you how many people are actively looking for a solution to a problem. Click-through rates on test ads tell you whether your positioning creates enough interest to motivate action. Waitlist sign-ups tell you whether people are willing to give you something of value, even if only an email address, in exchange for a promise. And actual purchase data from a pilot or pre-sale tells you whether demand translates into revenue.
Early in my career, I made the mistake of treating customer interviews as predictive. We’d run a series of conversations, hear consistent enthusiasm, and walk away feeling confident. What I learned over time is that enthusiasm in a conversation is easy. Enthusiasm at the point of purchase is the test that matters. The gap between the two is where most validation frameworks fall apart.
Search data is one of the most underused validation inputs. If nobody is searching for the problem your product solves, that’s a signal worth taking seriously. It doesn’t mean the market doesn’t exist, but it does mean you’ll need to create demand rather than capture it, and that’s a fundamentally different and more expensive commercial challenge. Tools that surface gap analysis through search intent can help identify whether a category has genuine search-driven demand or whether you’re working in a space people don’t yet have language for.
How to Structure a Validation Process That Produces Honest Signals
A validation process worth running has three properties: it’s staged, it uses multiple signal types, and it has pre-defined thresholds that determine whether you proceed.
Staged means you don’t try to validate everything at once. You start with the highest-risk assumption, the thing that, if wrong, makes everything else irrelevant. For most businesses, that’s whether the problem is real and acute enough to motivate action. Once that’s established, you move to concept validation, then pricing, then channel. Each stage has a gate. If you don’t clear the gate, you stop, adjust, or kill the project.
Multiple signal types means you’re not relying on a single data source. Qualitative research gives you depth and language. Quantitative research gives you scale and statistical confidence. Behavioural data (search trends, click rates, sign-ups, pilot purchases) gives you the most honest signal of all. None of these is sufficient on its own. The picture you’re building requires all three.
Pre-defined thresholds mean you decide in advance what a positive result looks like. Not after you’ve seen the data. If you wait until you have results before deciding what counts as success, you’ll find ways to rationalise whatever you got. If you’ve agreed beforehand that a pilot needs to achieve a specific cost per acquisition to indicate viable unit economics, you have a clear, defensible decision rule. This is the discipline that separates validation from post-hoc justification.
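A pre-agreed decision rule can be written down, literally, before the pilot runs. The sketch below shows what that might look like in a few lines of Python; the threshold figures are hypothetical placeholders, not recommended benchmarks, and the point is that they are fixed before anyone sees the data.

```python
# A minimal pre-registered decision gate for a paid pilot.
# All threshold values are hypothetical examples; agree on your own
# before the test runs, not after you see the results.

MAX_CPA = 45.00        # maximum viable cost per acquisition, agreed in advance
MIN_CONVERSIONS = 30   # minimum sample size before the result counts at all

def pilot_gate(spend: float, conversions: int) -> str:
    """Return a go / no-go decision from pilot spend and conversions."""
    if conversions < MIN_CONVERSIONS:
        return "inconclusive: sample too small, extend or redesign the test"
    cpa = spend / conversions
    if cpa <= MAX_CPA:
        return f"proceed: CPA {cpa:.2f} within threshold {MAX_CPA:.2f}"
    return f"stop or adjust: CPA {cpa:.2f} exceeds threshold {MAX_CPA:.2f}"

print(pilot_gate(spend=1800.00, conversions=50))  # CPA 36.00 -> proceed
```

Trivial as it looks, committing the rule in this form removes the room for post-hoc rationalisation: the gate either clears or it doesn’t.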
When I was running agency growth projects, we applied a version of this to new service lines. Before we invested in building out a capability, we’d run a small number of paid pilots at a price point that reflected our actual cost structure, not a discounted introductory rate. If clients would pay the real price for the real service, we had a signal. If they’d only engage at a discount, we had a different kind of signal, one that told us the economics wouldn’t work at scale even if the demand was there.
Minimum Viable Tests Worth Running
The best validation tests are cheap, fast, and produce behavioural data rather than opinions. Here are the ones that have consistently proven useful.
Landing page tests. Build a simple page describing the product or service as if it exists. Drive paid traffic to it. Measure sign-ups, enquiries, or pre-purchase intent. The cost of a small paid search or social campaign is negligible compared to the cost of building something nobody wants. The conversion rate on that page, against a defined benchmark, tells you whether the proposition generates genuine interest. This approach has been used by everyone from early-stage startups to large enterprises testing new category entries, and it remains one of the most reliable low-cost validation tools available.
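Judging whether a landing page has cleared its benchmark deserves more than eyeballing the conversion rate, because small samples produce noisy rates. One simple approach, sketched below with illustrative numbers rather than recommended benchmarks, is to require that the lower bound of a normal-approximation confidence interval on the conversion rate sits above the benchmark.

```python
import math

# Does an observed landing-page conversion rate clear a pre-defined
# benchmark with reasonable confidence? Figures are illustrative only.

def clears_benchmark(visitors: int, signups: int,
                     benchmark: float, z: float = 1.64) -> bool:
    """True if the lower bound of a one-sided ~95% normal-approximation
    confidence interval on the conversion rate exceeds the benchmark."""
    rate = signups / visitors
    stderr = math.sqrt(rate * (1 - rate) / visitors)
    return (rate - z * stderr) > benchmark

# 1,000 paid visitors and 45 sign-ups, tested against a 3% benchmark:
print(clears_benchmark(1000, 45, 0.03))
```

The same 4.5% conversion rate on 200 visitors would not clear the check, which is exactly the discipline you want: an encouraging rate on a thin sample is an invitation to keep testing, not a green light.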
Concierge pilots. Before you build the product, manually deliver the outcome. If you’re planning to build a software tool that automates a workflow, do that workflow by hand for a small number of paying customers. You’ll learn whether the outcome is valued, what the experience needs to feel like, and whether the economics are viable, all before you’ve written a line of code or invested in production infrastructure.
Pre-sales. If people will pay for something before it exists, you have the most honest validation signal available. Pre-sales require real commitment from real customers. They filter out the people who say they’re interested but wouldn’t actually part with money. They also give you early revenue to fund development, which is a useful side effect.
Competitive proxy testing. Look at how existing alternatives are performing. If competitors in the space are growing, attracting investment, and retaining customers, that’s a form of market validation for the category. If they’re struggling despite solid execution, that’s a signal too. Understanding what optimisation and testing frameworks look like in adjacent categories can help you calibrate what good looks like before you run your own tests.
Ad creative tests. Run small paid campaigns testing different positioning angles against the same audience. Which headline gets clicked? Which value proposition generates the most engagement? This isn’t just useful for validation; it’s useful for positioning development. The language that performs in a test environment is often the language that should lead your go-to-market. Understanding how audiences respond to different creative signals, including how attention and creative cut-through work in digital environments, sharpens how you interpret these results.
When Validation Gives You a Complicated Answer
Validation doesn’t always produce a clean yes or no. More often it produces a nuanced signal that requires interpretation.
You might find that demand exists, but only in a segment smaller than your business case assumed. That’s useful information. It might mean the opportunity is real but the addressable market requires recalibration. It might mean you need to find a higher-value niche before you can consider broader expansion. It might mean the business model only works at a price point you haven’t tested yet.
You might find that demand exists for the category but not for your specific approach. That’s a positioning problem, not a market problem, and it’s solvable. The validation has done its job by identifying where the gap is.
You might find that the economics work in some channels but not others. A product that converts well through content-driven organic search but not through paid social isn’t a failed validation. It’s a channel strategy finding. The question becomes whether the viable channel can deliver sufficient volume to build a business on.
I’ve seen businesses kill projects on the basis of ambiguous validation results when the right response was to narrow the scope and test again. And I’ve seen businesses push forward on the basis of partial positive signals when the right response was to stop. The discipline is in reading what the signal is actually telling you, not what you want it to tell you.
One thing that’s become clearer to me over time: how your content and positioning surface in search, including how they perform in AI-driven discovery environments, is increasingly part of the validation picture. If your proposition isn’t finding its way into the conversations people are having with search tools, that’s worth factoring in. The way AI citations are shaping search visibility is changing what it means for a proposition to be findable, and that has implications for demand validation work.
The Validated Market Problem
There’s a trap worth naming explicitly. Validating that a market exists is not the same as validating that you can compete in it profitably.
I’ve worked with businesses that had done thorough market validation, confirmed demand, confirmed willingness to pay, confirmed that competitors were succeeding, and then launched into a market where the cost of customer acquisition made the unit economics unworkable. The market was real. Their ability to access it at a viable cost was not.
This is why channel validation matters as much as demand validation. If your customer acquisition cost is higher than your lifetime value at any realistic retention rate, you don’t have a business, regardless of how real the underlying demand is. The paid search landscape in many categories is now competitive enough that late entrants face structurally higher CPCs than incumbents, which means the economics that worked for the category pioneers won’t necessarily work for you. I watched this play out repeatedly when managing large-scale paid search programmes: the brands that entered a category early built quality scores and historical data that gave them a structural cost advantage that newcomers couldn’t easily close.
Go-to-market validation, testing whether you can reach the right audience at a cost that works, needs to be part of the validation process from the start, not an afterthought once the product is built.
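The unit-economics check behind this argument is back-of-envelope arithmetic, and it is worth doing explicitly. The sketch below uses a simple LTV model (monthly contribution margin divided by monthly churn) and the common rule of thumb that an LTV-to-CAC ratio of around 3x is the floor for viability; all input figures are hypothetical placeholders.

```python
# Back-of-envelope unit economics: does customer lifetime value cover
# acquisition cost at a realistic retention rate? Inputs are hypothetical.

def lifetime_value(monthly_margin: float, monthly_churn: float) -> float:
    """Simple LTV: margin per month divided by churn rate
    (expected customer lifetime in months = 1 / churn)."""
    return monthly_margin / monthly_churn

def viable(ltv: float, cac: float, min_ratio: float = 3.0) -> bool:
    """A common rule of thumb treats LTV/CAC of ~3x as the viability floor."""
    return ltv / cac >= min_ratio

ltv = lifetime_value(monthly_margin=40.0, monthly_churn=0.05)  # 800.0
print(viable(ltv, cac=220.0))  # 800/220 ≈ 3.6 -> viable
print(viable(ltv, cac=350.0))  # 800/350 ≈ 2.3 -> not viable
```

The second case is the late-entrant trap described above: identical product, identical demand, but a structurally higher acquisition cost pushes the ratio below the floor and there is no business underneath it.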
The broader discipline of market research, including competitive analysis, audience research, and demand mapping, sits underneath all of this. If you’re building out that capability, the Market Research & Competitive Intel hub is a good place to work through the full framework.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
