Market Validation: Stop Building What Nobody Wants
Market validation is the process of testing whether real demand exists for a product, service, or idea before committing significant resources to it. Done properly, it replaces assumption with evidence, and it is one of the most commercially valuable things a marketing team can do before a launch.
Most businesses skip it, rush it, or confuse it with market research. Those are three different failure modes, and they all lead to the same place: money spent on something the market never asked for.
Key Takeaways
- Market validation tests whether demand actually exists before you build, not after. Reversing that order is one of the most expensive mistakes in marketing.
- Surveys and focus groups measure stated intent, not real behaviour. The gap between what people say they will do and what they actually do is where most validation exercises fall apart.
- A small, well-structured test with real money on the line will tell you more than months of internal debate or polished research decks.
- Validation is not a one-time gate. It should happen at multiple stages: concept, pricing, channel, and message.
- The goal is not to confirm your hypothesis. It is to find out if you are wrong before the cost of being wrong becomes unrecoverable.
In This Article
- Why Most Validation Fails Before It Starts
- What Market Validation Actually Tests
- The Problem With Surveys and Focus Groups
- How Behavioural Validation Works in Practice
- Validation at the Pricing Layer
- Channel Validation: The Part Everyone Skips
- When Validation Gives You a Bad Answer
- Building a Validation Process That Actually Gets Used
Why Most Validation Fails Before It Starts
The most common failure in market validation is not a methodology problem. It is a motivation problem. Teams go into validation wanting to confirm a decision that has already been made internally. The brief is written, the roadmap is set, the budget is allocated. Validation becomes a box-ticking exercise rather than a genuine test of commercial viability.
I have sat in enough agency briefings and client strategy sessions to recognise this pattern immediately. The question being asked is not “should we do this?” It is “can you help us justify doing this?” Those are fundamentally different briefs, and conflating them produces research that is expensive, politically useful, and commercially worthless.
Good validation starts with intellectual honesty about what you are trying to find out, and genuine willingness to act on an answer you did not want. Without that, you are not validating anything. You are performing validation.
If you want a broader framework for where validation sits within market research and competitive intelligence, the Market Research and Competitive Intel hub covers the full landscape, including how to structure research before you even reach the validation stage.
What Market Validation Actually Tests
Validation is not a single test. It is a series of distinct questions, each of which needs its own method and its own honest answer.
The first question is whether the problem exists at meaningful scale. Not whether a problem exists at all, which is easy to confirm with a handful of conversations, but whether enough people experience it acutely enough to pay for a solution. That is a much harder question, and it requires more than qualitative interviews with people who already agree with you.
The second question is whether your solution is the right answer to that problem. This is where most product teams get into trouble. They validate the problem and then assume their solution is self-evidently correct. It rarely is. Customers have a problem. That does not mean they see your solution as the answer.
The third question is whether people will pay what you need them to pay. Willingness to pay is one of the most misunderstood variables in marketing. People consistently overstate it in surveys and underdeliver on it in practice. The only reliable test of willingness to pay is an actual transaction, or something close enough to one that the stakes feel real.
The fourth question is whether you can reach the right people efficiently enough for the economics to work. A validated product with no viable acquisition channel is not a validated business. Channel validation is part of market validation, and it is the part most teams leave until after launch.
The Problem With Surveys and Focus Groups
Surveys and focus groups are the most commonly used validation tools and the least reliable. That is not a fringe position. It is well understood by anyone who has spent time watching what people say versus what they do.
When you ask someone in a survey whether they would buy a product, they are answering a hypothetical. They have no skin in the game. There is no cost to saying yes, and there is a natural social incentive to be agreeable, especially in a group setting. Focus groups amplify this problem because dominant voices shape group consensus and people moderate their real opinions to avoid conflict.
The MarketingProfs piece on evaluating marketing lists makes a point that applies equally here: the quality of the data matters more than the quantity. A hundred survey responses telling you what you want to hear are worth less than ten real transactions telling you something you did not expect.
This does not mean qualitative research has no role. It has a very important role in the early stages of problem discovery. Talking to customers is how you learn what questions to ask. But it is not how you validate whether the answer is commercially viable. For that, you need behavioural evidence.
How Behavioural Validation Works in Practice
Behavioural validation means creating conditions where the person has to make a real decision, not just express an opinion. It does not have to be a full product launch. It just has to involve some form of genuine commitment.
The simplest version is a landing page test. You describe the product or service, you present a price, and you ask people to sign up, pre-register, or purchase. You drive traffic to it through paid channels and you measure conversion. You do not need to fulfil every order to learn something valuable. You need to find out whether people, when faced with an actual decision, behave the way your survey data predicted they would.
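As a minimal sketch of what measuring that might look like, assuming you have pulled visitor and sign-up counts from your analytics tool: the function below computes the observed conversion rate alongside a Wilson score interval, so a small sample does not get over-interpreted in either direction. The variable names, traffic figures, and the 2% target rate are all illustrative assumptions, not numbers from this article.

```python
import math

def wilson_interval(conversions: int, visitors: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a conversion rate.

    More trustworthy than the naive rate at the small sample sizes
    a landing page test typically produces.
    """
    if visitors == 0:
        return (0.0, 0.0)
    p = conversions / visitors
    denom = 1 + z**2 / visitors
    centre = (p + z**2 / (2 * visitors)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / visitors + z**2 / (4 * visitors**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Illustrative numbers: 1,400 paid visitors, 31 sign-ups, 2% target rate.
visitors, conversions, target_rate = 1400, 31, 0.02
low, high = wilson_interval(conversions, visitors)
print(f"observed {conversions / visitors:.2%}, 95% CI [{low:.2%}, {high:.2%}]")

if low > target_rate:
    print("Signal clears the target even at the pessimistic end.")
elif high < target_rate:
    print("Even the optimistic end misses the target: the demand signal is weak.")
else:
    print("Inconclusive: keep the test running or widen the traffic.")
```

The interval matters because a headline rate of 2.2% from a small test can be consistent with a true rate well below your threshold. Treating an inconclusive result as a pass is one of the quieter ways validation goes wrong.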
Early in my career, I ran a paid search campaign for a music festival at lastminute.com. The campaign was not complicated. The targeting was straightforward, the creative was functional rather than clever, and the offer was clear. Within roughly a day, it had generated six figures of revenue. That result was not because we had done months of validation research. It was because the demand was already there and we put a relevant offer in front of people who were actively looking. The market validated itself in real time, through real transactions, with real money.
That experience taught me something I have never forgotten: a clean offer in front of the right audience beats a beautifully researched strategy every time. Validation, in the end, is about finding out whether that clean offer exists, and paid search is one of the fastest ways to test it, because the intent signal is already there.
Tools like Optimizely’s framework for input metrics and KPIs provide a useful lens for structuring what you measure during these tests. The point is not to track everything. It is to identify the one or two signals that tell you whether the core hypothesis is holding up.
Validation at the Pricing Layer
Pricing is where validation gets uncomfortable, because it is where the gap between stated intent and real behaviour is widest.
Most teams set a price based on competitive benchmarking, cost-plus modelling, or gut instinct, and then validate the product at that price. The problem is that the price is not a separate variable. It is part of the proposition. A product that converts at £29 per month may completely fail at £49, not because the product changed but because the perceived value equation shifted.
Proper pricing validation means testing at multiple price points, ideally with real transactions rather than survey-based conjoint analysis. It means being willing to find out that your target price point does not work, and having a plan for what you do with that information.
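To sketch what comparing price points can look like, using entirely hypothetical figures: the yardstick is not conversion rate alone, because a lower rate at a higher price can still win on revenue per visitor, and a price can also break the proposition outright.

```python
# Hypothetical results from running the same landing page at three prices.
# Each entry: (monthly price in GBP, visitors, paying customers)
price_tests = [
    (29, 2000, 58),   # 2.9% conversion
    (39, 2000, 48),   # 2.4% conversion
    (49, 2000, 14),   # 0.7% conversion
]

for price, visitors, buyers in price_tests:
    rate = buyers / visitors
    revenue_per_visitor = price * rate
    print(f"£{price}/mo: {rate:.1%} conversion, "
          f"£{revenue_per_visitor:.2f} revenue per visitor")

# In this invented data, £39 wins on revenue per visitor despite
# converting worse than £29, while £49 shows the perceived-value
# equation collapsing entirely.
```

The structure of the comparison is the point, not the numbers: until you have run the offer at more than one price with real transactions, you do not know which of these three patterns your market follows.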
I have seen businesses spend twelve months building a product, launch it at a price that made the unit economics work on paper, and then watch conversion rates come in at a fraction of what was projected. The validation had been done on the product. Nobody had validated the price. They are not the same exercise.
Channel Validation: The Part Everyone Skips
A product with validated demand and a validated price can still fail commercially if there is no efficient way to reach the people who want it. Channel validation is the test that most teams defer until after launch, at which point the cost of getting it wrong has already compounded.
Channel validation asks a specific question: can we acquire customers at a cost that makes the economics work? It is not enough to know that a channel exists or that competitors are using it. You need to know your cost per acquisition at your conversion rate with your creative, not someone else’s numbers from a case study.
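One way to frame that question numerically, as a sketch with hypothetical inputs: derive the maximum you can afford to pay per customer from your own margin and retention assumptions, then compare it to the CPA your test actually produced. Every figure below is an assumption for illustration.

```python
def allowable_cpa(monthly_price: float, gross_margin: float,
                  expected_months_retained: float,
                  target_payback_share: float = 1.0) -> float:
    """Maximum spend per acquired customer: contribution over the
    expected lifetime, scaled down if you need faster payback."""
    lifetime_contribution = monthly_price * gross_margin * expected_months_retained
    return lifetime_contribution * target_payback_share

# Hypothetical channel test: £3,000 spent, 42 customers acquired.
spend, customers = 3000.0, 42
observed_cpa = spend / customers                      # ~£71 per customer
ceiling = allowable_cpa(monthly_price=39, gross_margin=0.8,
                        expected_months_retained=9)   # ~£281

print(f"observed CPA £{observed_cpa:.0f} vs allowable £{ceiling:.0f}")
if observed_cpa <= ceiling:
    print("Channel economics hold at test scale; revalidate as you scale.")
else:
    print("Channel fails at your numbers, whatever the case studies say.")
```

The discipline is in using your conversion rate and your margin, not a competitor's. A channel that works at someone else's economics is not a validated channel for you.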
When I was building out performance marketing at iProspect, we were managing hundreds of millions in ad spend across dozens of categories. One of the clearest patterns I saw was the gap between clients who had done genuine channel validation before scaling and those who had assumed their category economics would mirror what they had read about. The ones who had tested first scaled more predictably and wasted far less budget finding out what did not work.
Paid search remains one of the cleanest channel validation tools available, because it captures existing demand rather than trying to create it. If people are searching for what you offer and they are not converting on your landing page, you have a conversion problem. If nobody is searching, you have a demand problem. Those are different problems with different solutions, and paid search tells you which one you have faster than almost anything else. The history of quality-based pricing in search marketing is a useful reminder of how quickly the economics of paid channels can shift, which is another reason to validate early rather than assume historical benchmarks will hold.
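To make the demand-versus-conversion distinction concrete, here is a sketch of the triage logic with hypothetical thresholds; the real cut-offs depend entirely on your category and your economics.

```python
def diagnose_search_test(monthly_searches: int, clicks: int,
                         conversions: int,
                         min_searches: int = 5000,
                         min_conversion: float = 0.01) -> str:
    """Hypothetical triage for a paid search validation test."""
    if monthly_searches < min_searches:
        return "Demand problem: too few people are looking for this at all."
    if clicks == 0:
        return "No click data yet: the test has not run long enough to diagnose."
    if conversions / clicks < min_conversion:
        return "Conversion problem: demand exists but the offer or page is not landing."
    return "Demand and conversion both clear the bar: move on to the pricing layer."

print(diagnose_search_test(monthly_searches=22000, clicks=800, conversions=3))
```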
Social channels add a different dimension to validation. They are better for testing message resonance and creative than for measuring true purchase intent, because the audience is not in an active search mindset. That does not make them useless for validation. It means you need to be clear about what you are testing and what the signal actually means.
When Validation Gives You a Bad Answer
This is the part of the validation conversation that most articles skip, because it is uncomfortable. What do you do when the test tells you the market is not there?
There are three honest responses. The first is to accept the finding and stop. This is the commercially rational response and the least common one, because it requires overriding sunk cost thinking and internal politics.
The second is to iterate. A bad result on one configuration of the proposition does not necessarily mean the underlying idea is wrong. It might mean the price is wrong, the message is wrong, the channel is wrong, or the segment is wrong. Iteration means running a different test with a different configuration, not rerunning the same test and hoping for a different result.
The third is to reframe the question. Sometimes a validation test fails because the question being tested was too narrow. The market does not want your solution in the form you presented it, but it might want something adjacent. That is a useful finding, not a failure, provided you are honest about what it tells you.
What is not a valid response is to dismiss the test as flawed and proceed as planned. I have watched that happen more times than I care to count, usually in organisations where the validation was commissioned to satisfy a process rather than to genuinely inform a decision. The test gets blamed for producing the wrong answer, and the launch goes ahead on the original timeline. Sometimes it works out. More often, it does not.
Building a Validation Process That Actually Gets Used
The practical challenge with market validation is that it takes time and resources that most teams feel they cannot spare, especially when there is pressure to launch quickly. The solution is not to do less validation. It is to build a validation process that is fast enough and lightweight enough to fit into the actual pace of the business.
That means starting with the highest-risk assumptions, not the easiest ones to test. Every proposition has a load-bearing assumption: the one thing that, if it turns out to be wrong, makes everything else irrelevant. Find that assumption first and test it first. Do not spend three weeks validating the onboarding flow before you have tested whether anyone wants to onboard at all.
It also means being ruthlessly clear about what constitutes a pass. Before you run the test, define the number. What conversion rate, what cost per acquisition, what volume of sign-ups would tell you the hypothesis is holding up? If you do not define it in advance, you will find a way to interpret any result as broadly encouraging, which is not validation. It is wishful thinking with a spreadsheet attached.
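A sketch of what "define the number in advance" can look like in practice, with illustrative thresholds and field names: the criteria object is written down before the test runs, so the interpretation cannot drift once the results arrive.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PassCriteria:
    """Agreed before the test starts, not after the results arrive."""
    min_conversion_rate: float   # e.g. 0.02 = 2%
    max_cpa: float               # in GBP
    min_signups: int             # enough volume to trust the rate at all

def evaluate(visitors: int, signups: int, spend: float,
             criteria: PassCriteria) -> str:
    if signups < criteria.min_signups:
        return "INCONCLUSIVE: not enough volume to judge either way."
    rate = signups / visitors
    cpa = spend / signups
    if rate >= criteria.min_conversion_rate and cpa <= criteria.max_cpa:
        return f"PASS: {rate:.1%} conversion at £{cpa:.0f} CPA."
    return f"FAIL: {rate:.1%} conversion at £{cpa:.0f} CPA against the pre-agreed bar."

# Hypothetical criteria, agreed in writing before launch:
criteria = PassCriteria(min_conversion_rate=0.02, max_cpa=60.0, min_signups=30)
print(evaluate(visitors=1400, signups=31, spend=2100.0, criteria=criteria))
```

In this invented example the conversion rate clears the bar but the CPA does not, so the test fails even though part of the result looks encouraging. That is exactly the kind of outcome a pre-agreed threshold protects you from rationalising away.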
User behaviour tools can play a useful role in the iteration phase of validation. Understanding where people drop off on a landing page, which elements they engage with, and where the friction sits gives you something to act on. The Hotjar acceptable use framework is a good reference point for understanding the boundaries of behavioural data collection, particularly if you are running tests across markets with different privacy expectations.
Experiment collaboration matters too, particularly in larger teams where multiple people have a stake in the outcome. Optimizely’s experiment collaboration tooling reflects a broader truth: validation tests fail not just because of bad methodology but because of poor coordination between the people running the test and the people who need to act on the results.
Early in my career, when I could not get budget for a new website, I built it myself. That experience was not just about resourcefulness. It was about testing whether a thing could exist before asking anyone else to invest in it. The principle is the same in market validation: do the smallest version of the test that can produce a meaningful signal, and do it before you ask for the resources to go further.
For more on how validation fits within a broader intelligence-gathering process, including competitive analysis and trend monitoring, the Market Research and Competitive Intel hub has the full context. Validation does not exist in isolation. It is one layer of a broader picture, and the other layers matter too.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
