Voice of the Customer Surveys: What You Ask Matters More Than How Many You Send
Voice of the customer surveys are structured research instruments that capture what customers actually think, need, and experience, as opposed to what internal teams assume they do. Done well, they are one of the most commercially useful tools available to a marketing or product team. Done poorly, they generate a spreadsheet of confirmatory responses that nobody acts on.
The gap between those two outcomes is almost never about survey software. It is about the quality of the questions, the honesty of the analysis, and whether the organisation was genuinely prepared to hear something inconvenient.
Key Takeaways
- The design of your questions determines the quality of your insight. Leading questions produce flattering data, not useful data.
- Voice of the customer surveys fail most often at the analysis stage, not the collection stage. Collecting responses is easy. Interpreting them honestly is harder.
- Surveys should be designed around a specific decision you need to make, not a general desire to “understand customers better.”
- Customer language captured in open-text responses is often more valuable than any quantitative score. It belongs in your messaging, your positioning, and your sales conversations.
- If your survey results consistently confirm what you already believed, the survey was probably designed to do exactly that.
In This Article
- Why Most Voice of the Customer Surveys Produce Comfortable Data
- What Should a Voice of the Customer Survey Actually Measure?
- The Question Design Principles That Actually Matter
- The Segment Question Most Teams Skip
- How to Analyse Voice of the Customer Data Without Fooling Yourself
- The Commercial Value of Customer Language
- When to Run a Survey and When to Do Something Else
- Closing the Loop: What Happens After the Survey
I have spent a significant part of my career working with businesses that were convinced they understood their customers. Sometimes they were right. More often, they understood a version of their customers filtered through internal assumptions, sales team anecdotes, and the loudest voices in the room. Voice of the customer research, when it is properly constructed, cuts through that noise. This article covers how to build surveys that generate genuinely useful insight, and how to avoid the structural mistakes that make most of them useless.
Why Most Voice of the Customer Surveys Produce Comfortable Data
There is a pattern I have seen across dozens of client engagements. A business decides it wants to hear from its customers. A survey goes out. The results come back broadly positive, with a few minor gripes noted. The team feels validated. The survey gets filed. Nothing changes.
This is not a technology problem. It is a design problem, and underneath the design problem is usually a political one. Surveys built by internal teams tend to be built by people who have a stake in the outcome. The questions are framed in ways that make it easy for customers to say positive things. The response options are structured so that neutral sits closer to positive than negative. The open-text questions are placed at the end, after respondent fatigue has set in, so they get skipped.
I ran into this early in my agency career when we were brought in to help a client interpret a customer satisfaction programme they had been running for two years. The scores were consistently high. The business was losing customers at a rate that did not match those scores at all. When we looked at the survey instrument itself, the problem was obvious. Every question was framed around what the business did well. There was no mechanism for customers to articulate why they might consider leaving, or what a competitor was offering that was more compelling. The survey was measuring satisfaction within a narrow frame the business had defined for itself.
Genuine voice of the customer research requires a degree of intellectual courage. You have to be willing to ask questions whose answers you might not like, and you have to be willing to report those answers honestly to people who may not want to hear them.
What Should a Voice of the Customer Survey Actually Measure?
The most useful framing I have found is this: a voice of the customer survey should be designed around a specific decision your business needs to make. Not a general desire to understand customers better. A specific decision.
That decision might be: should we reposition this product for a different segment? Should we change our pricing structure? Are we losing deals at a particular stage because of a messaging problem or a product problem? Is our onboarding experience creating churn in the first 90 days?
When you start with the decision, the survey design becomes much cleaner. You know what you need to know. You can strip out every question that does not contribute to that decision. The survey gets shorter, the responses get more focused, and the analysis becomes more actionable.
When you start with “let’s understand our customers better,” you end up with a 40-question omnibus that asks about everything and illuminates nothing. I have seen these surveys. I have probably commissioned a few of them earlier in my career. They are expensive to run and the outputs are almost impossible to act on because nothing is prioritised.
Forrester’s work on intelligent growth models makes a related point about how organisations translate customer insight into commercial action. The businesses that do this well are the ones that have connected their research programmes to specific strategic questions, not the ones running the most surveys.
The Question Design Principles That Actually Matter
Survey methodology has a fairly well-established set of principles, and most of them get ignored in practice. Here are the ones I return to consistently.
Ask one thing at a time
Double-barrelled questions are everywhere in customer surveys. “How satisfied are you with the quality and speed of our service?” is a single question that is actually two questions. A customer who thinks quality is excellent but speed is poor has no way to answer it accurately. You get a compromise response that tells you nothing useful about either dimension.
Avoid leading language
“How much did you enjoy working with our team?” presupposes that the customer enjoyed it. “How would you describe your experience working with our team?” does not. The difference in response distribution between those two questions is significant. Leading language is often unintentional, but it consistently skews results toward the positive.
Put your most important open-text question second or third, not last
Respondent fatigue is real. If you bury your open-text question at the end of a 15-question survey, you will get one-word answers and blank responses. If the qualitative insight is what you are really after, structure the survey so it appears while the respondent is still engaged. I have seen this single change double the quality of open-text responses in customer programmes we have rebuilt.
Offer a “none of the above” or “not applicable” option
Forcing respondents to choose from options that do not reflect their experience produces inaccurate data. It also frustrates respondents and increases abandonment. If someone does not have a view on a particular question, let them say so.
Test the survey on someone who was not involved in building it
This sounds obvious. It rarely happens. The people who build a survey understand the intent behind every question. A fresh reader will find ambiguities, confusing instructions, and questions that could be interpreted in multiple ways. Those problems need to be fixed before the survey goes to customers, not after you have collected 500 responses you cannot use.
The Segment Question Most Teams Skip
Aggregate survey data is almost always misleading. The average response across your entire customer base will smooth over the differences between your best customers and your worst fits, between long-term accounts and recent acquisitions, between customers who use your product daily and those who have barely touched it.
Before you design a voice of the customer survey, you need to decide which customers you are asking. This is not about excluding inconvenient voices. It is about making sure the insight you collect is relevant to the decision you are trying to make.
If you are trying to understand why customers churn in the first six months, you need to survey churned customers, not your ten-year advocates. If you are trying to understand what drives expansion revenue, you need to survey customers who have expanded, and ideally compare them to customers of similar size and tenure who have not. If you want to understand why you are losing deals to a specific competitor, you need to speak to prospects who chose that competitor, not existing customers who chose you.
The segmentation decision is often where the most valuable insight lives. I spent time working across more than 30 industries during my agency years, and the pattern was consistent: the businesses that got the most value from customer research were the ones that asked precise questions of precisely defined segments, not the ones running broad satisfaction surveys across their entire database.
This connects to a broader point about how go-to-market strategy should be built. If you are thinking about how voice of the customer research fits into your wider growth planning, the articles in the Go-To-Market and Growth Strategy hub cover the commercial architecture that makes research like this actionable rather than merely interesting.
How to Analyse Voice of the Customer Data Without Fooling Yourself
Collection is the easy part. Analysis is where most voice of the customer programmes fall apart.
The most common failure mode is confirmation bias in the interpretation phase. A team collects 300 responses, finds that 68% rated a particular feature positively, and concludes that the feature is working well. They do not look at what the 32% who rated it neutrally or negatively said in the open-text field. They do not cross-tab the results by customer segment to see whether the negative responses are concentrated among a specific cohort. They take the headline number and use it to confirm what they already believed.
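To make that cross-tab step concrete, here is a minimal sketch in Python with pandas. The file name and column names (responses.csv, segment, feature_rating) are hypothetical stand-ins for whatever your survey tool exports; treat this as a starting point, not a prescribed pipeline.

```python
import pandas as pd

# Load the survey export. Column names here are illustrative:
# 'segment' (e.g. enterprise vs SMB), 'feature_rating' (1-5 scale).
df = pd.read_csv("responses.csv")

# Collapse the 1-5 rating into negative / neutral / positive bands
# so the cross-tab is readable at a glance.
bands = pd.cut(
    df["feature_rating"],
    bins=[0, 2, 3, 5],
    labels=["negative", "neutral", "positive"],
)

# Cross-tab by segment, normalised within each segment. This shows
# whether negative responses cluster in a specific cohort instead
# of relying on the headline percentage alone.
print(pd.crosstab(df["segment"], bands, normalize="index").round(2))
```

If, say, the negative band sits at 5% for long-tenure accounts but 40% for customers onboarded in the last quarter, the headline positive figure is hiding the story.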
Good analysis starts by actively looking for disconfirming evidence. What did the unhappy customers say? What do the customers who are most likely to churn have in common? What themes appear in the negative open-text responses that do not appear in the positive ones? Those are the signals that are most likely to tell you something you did not already know.
The other thing I would push teams to do is read the open-text responses in full before starting to code or categorise them. There is a tendency to jump straight to a thematic analysis framework, which means you end up looking for themes you have pre-defined rather than themes that emerge from the data. Reading the raw responses first, without a coding structure, often surfaces things you would not have thought to look for.
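Once you have read the raw responses, a rough frequency comparison can help pressure-test your impressions: words that are common in negative comments but rare in positive ones are candidate themes worth a second read. The sketch below is a deliberately blunt first pass in plain Python; the example comments and stopword list are invented for illustration.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "it", "is", "was",
             "to", "of", "in", "for", "we", "i", "but", "with"}

def theme_words(comments):
    """Count non-stopword tokens across a list of comments."""
    counts = Counter()
    for text in comments:
        counts.update(
            w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS
        )
    return counts

# Invented example data; in practice these lists come from your
# survey export, split by rating band.
negative = ["Setup took weeks and support never called back",
            "Onboarding was confusing and support was slow"]
positive = ["The support team is responsive",
            "Easy setup and great documentation"]

neg, pos = theme_words(negative), theme_words(positive)

# Counter subtraction keeps only the surplus: words that appear
# more often in negative comments than in positive ones.
for word, count in (neg - pos).most_common(10):
    print(word, count)
```

This will never replace reading the comments, but it is a useful check that the themes you coded by hand actually dominate the negative responses rather than merely the ones you remember.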
Tools like Hotjar can supplement survey data with behavioural signals, which helps you triangulate between what customers say and what they actually do. Both perspectives matter. Customers do not always accurately describe their own behaviour, and behaviour does not always capture the reasoning behind it. The combination is more useful than either alone.
The Commercial Value of Customer Language
There is a specific output from voice of the customer surveys that most teams underuse: the exact language customers use to describe their problems, their needs, and the value they get from your product or service.
This matters because there is almost always a gap between the language a business uses to describe itself and the language its customers use to describe the same thing. Marketing teams tend to write in the language of the business. The best-performing messaging is written in the language of the customer.
I have seen this play out repeatedly in campaign work. We would take phrases lifted directly from customer interviews or open-text survey responses and put them into ad copy or landing page headlines. The performance difference compared to internally generated copy was often substantial, not because the writing was better, but because the language was more resonant. Customers recognise their own words.
When you are coding open-text responses, keep a separate log of verbatim phrases that describe problems or outcomes in specific, concrete terms. Those phrases belong in your messaging framework, your sales scripts, your website copy, and your campaign briefs. They are not just research outputs. They are commercial assets.
BCG’s work on brand and go-to-market strategy touches on the alignment between how organisations present themselves and how customers actually perceive them. Voice of the customer research is one of the most direct ways to close that gap, but only if the language insight is actually fed back into the marketing system.
When to Run a Survey and When to Do Something Else
Surveys are not always the right tool. They are good at collecting structured responses at scale. They are poor at capturing nuance, uncovering unexpected motivations, or understanding complex decision-making processes.
If you need to understand why a specific type of customer is churning, a survey might give you directional data. But a series of 30-minute interviews with churned customers will almost certainly give you richer, more actionable insight. The survey tells you what. The interview tells you why.
If you are trying to understand how customers make purchase decisions in a complex B2B sale, a survey is a blunt instrument. The decision involves multiple stakeholders, extended timelines, and contextual factors that are difficult to capture in a multiple-choice format. Win/loss interviews, conducted by someone who was not involved in the sale, will give you significantly more useful data.
Surveys work well when you need to validate a hypothesis at scale, when you want to track a metric over time, or when you need to compare responses across a large and varied customer base. They work less well when the question you are trying to answer is genuinely exploratory.
The businesses I have seen get the most value from customer research tend to use surveys and qualitative methods in combination. The survey identifies patterns. The interviews explain them. Neither is sufficient on its own.
Vidyard’s research on go-to-market team performance and pipeline generation highlights how revenue teams often have access to more customer insight than they use. The constraint is rarely data collection. It is the translation of insight into commercial action.
Closing the Loop: What Happens After the Survey
The most important question in any voice of the customer programme is not “what did customers say?” It is “what are we going to do differently as a result?”
I have sat in too many debrief sessions where a team reviews survey results, nods along, identifies three or four things that need to change, and then everyone returns to their desks and carries on exactly as before. The survey becomes an exercise in listening rather than an exercise in improvement. Customers notice this over time. If they provide feedback and nothing changes, they stop providing it.
A voice of the customer programme that drives commercial value has three things: a clear decision it is designed to inform, an honest analysis process that does not filter out inconvenient findings, and a defined owner who is accountable for translating the insight into action.
That last point matters more than most teams acknowledge. If the survey results are shared broadly but nobody owns the response, nothing will change. The insight needs to land with someone who has both the authority and the motivation to act on it.
One of the things I observed during my time judging the Effie Awards was that the campaigns that stood out for genuine effectiveness were almost always built on a specific customer insight that had been taken seriously at the strategic level, not just noted and filed. The insight had changed something: the positioning, the channel mix, the message, the product. That is what voice of the customer research is for. Not to confirm what you already know. To change what you do.
If you are building or rebuilding your go-to-market approach and want to understand how customer research fits into a broader commercial framework, the Go-To-Market and Growth Strategy section of The Marketing Juice covers the strategic architecture that makes individual research programmes more than just isolated data collection exercises.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
