Voice of the Customer: Stop Collecting Feedback and Start Using It
Voice of the customer best practices come down to one discipline that most teams skip: closing the loop between what customers say and what the business actually changes. Collecting feedback is easy. Acting on it systematically, and in a way that compounds over time, is where most programmes fall apart.
Most VoC programmes are well-intentioned and structurally broken. They generate data, create dashboards, and then quietly inform nothing. The insights sit in a quarterly review deck, get acknowledged, and disappear. What separates the programmes that drive growth from the ones that produce reports is a different relationship between listening and deciding.
Key Takeaways
- Most VoC programmes fail not at collection but at activation: the feedback never reaches the decisions it should inform.
- Qualitative and quantitative signals need to work together. Numbers tell you what is happening; customer conversations tell you why.
- The most useful customer insight often comes unsolicited. Complaint logs, support tickets, and sales call recordings are frequently richer than formal surveys.
- VoC only creates value when it is tied to a named owner and a specific decision. Unowned insight is decorative.
- The best marketing teams use VoC to challenge internal assumptions, not to confirm them.
Why Most VoC Programmes Generate Noise Instead of Signal
I have sat in enough agency and client-side planning sessions to recognise the pattern. Someone presents a Net Promoter Score. Someone else shares a customer satisfaction survey from the previous quarter. There is a brief discussion about what the numbers mean. Then the meeting moves on to media budgets. Nothing changes.
The problem is not the data. The problem is that VoC is treated as a reporting function rather than a decision-support function. When I was running iProspect, we grew the team from around 20 people to over 100 across several years. One of the things that became clear early on was that client retention was not primarily a function of performance metrics. It was a function of whether clients felt heard. And “felt heard” was not something you could measure in a satisfaction score. It showed up in renewal conversations, in the tone of emails, in whether clients called you when something went wrong or waited until the contract review.
Formal VoC programmes often miss this entirely because they are designed to measure sentiment at a point in time rather than to understand the texture of the relationship over time. A client who scores you 7 out of 10 consistently is not a satisfied client. They are a client who has not yet found a reason to leave.
There is a broader set of frameworks and tools in the Market Research and Competitive Intelligence hub if you want to think about how VoC fits into a wider research architecture. It belongs alongside competitor analysis, trend monitoring, and category insight rather than sitting in isolation as a customer service metric.
What Does a Useful VoC Programme Actually Look Like?
The mechanics vary by organisation size and category, but the structural requirements are consistent. A VoC programme that drives decisions rather than reports needs four things: the right inputs, a clear synthesis process, named ownership, and a feedback loop back to the customer.
On inputs: the instinct is to default to surveys. Surveys are fine as one signal, but they are a lagging, low-resolution view of customer experience. By the time someone fills in a post-purchase survey, the moment of friction or delight is already weeks old. The richer signals tend to be closer to the actual experience: support ticket language, sales call recordings, social listening, community forums, and direct customer interviews conducted without an agenda.
Tools like Hotjar Engage have made it considerably easier to conduct moderated customer interviews at scale without the overhead of traditional qualitative research. That kind of direct conversation, where you are listening rather than leading, tends to surface things that no survey would ever capture. The specific language customers use to describe a problem is often more valuable than the problem itself, because it tells you how to talk to them about it.
On synthesis: the goal is not to aggregate all feedback into a single score. The goal is to identify patterns that are commercially significant. That means distinguishing between feedback that is frequent and feedback that is important. A small number of customers might be telling you something critical about a product flaw that will eventually affect everyone. A large number of customers might be expressing mild preference about something that does not materially affect retention or acquisition. The synthesis process needs to weight by commercial consequence, not just volume.
On ownership: every insight that is surfaced through VoC needs a named person responsible for determining whether it warrants action. “The team” owns nothing. If the insight touches product, it needs a product owner. If it touches messaging, it needs a marketing owner. If it touches the service experience, it needs an operations owner. Without this, insights accumulate and nothing moves.
On closing the loop: this is the part most programmes ignore entirely. When a customer tells you something that leads to a change, tell them. Not in a generic “we value your feedback” email, but in a specific, honest communication that says: you told us this was a problem, we looked into it, here is what we changed. That single behaviour does more for loyalty than most retention marketing programmes I have seen.
How Do You Balance Qualitative and Quantitative VoC Data?
This is where a lot of marketing teams get into difficulty. They either over-index on quantitative data because it feels objective and defensible, or they over-index on qualitative because the stories are compelling and easy to present. Neither approach on its own is reliable.
Quantitative VoC data tells you what is happening at scale. It tells you that satisfaction scores dropped in a particular segment after a price change, or that NPS is lower among customers who came through a specific acquisition channel. It gives you statistical confidence and allows you to track change over time. What it cannot tell you is why any of this is happening, or what customers actually want you to do about it.
Qualitative VoC data fills that gap. A well-run customer interview programme, even with a relatively small sample, can surface the reasoning behind the numbers in a way that no survey can replicate. The challenge is that qualitative insight is easy to misuse. It is tempting to select the interviews that confirm what you already believe, or to weight the most articulate customer’s view above all others. Good qualitative research requires the same intellectual discipline as good quantitative research: you have to be genuinely open to being wrong.
I judged the Effie Awards for several years, and one of the things that became apparent when reviewing the submissions was how often the most effective campaigns were built on a genuinely unexpected customer insight. Not the insight the brand team expected to find, but the one that emerged from actually listening. The campaigns that were built on assumption, however well-executed, tended to underperform. The data was always there in hindsight. Someone just had not looked for it before briefing the agency.
The practical approach is to use quantitative data to identify where to investigate and qualitative data to understand what you find. A drop in satisfaction scores in a particular region is a prompt to go and talk to customers in that region, not a finding in itself.
Which VoC Inputs Are Most Underused?
Formal surveys and NPS tracking get the most investment and attention. The inputs that tend to be most neglected are also the ones that are frequently most revealing.
Sales call recordings are one of the most underused VoC assets in B2B marketing. Every objection, every question, every moment of hesitation in a sales call is customer insight. The language prospects use to describe their problem before they have been exposed to your messaging is often more honest and more useful than anything you will get from a post-purchase survey. If your sales team is recording calls and your marketing team is not listening to them regularly, you are leaving a significant source of intelligence untouched.
Customer support tickets are another. The specific language of complaints, the frequency of particular issues, the things customers say they expected versus what they got, all of this is a continuous stream of product and experience intelligence. When I work with businesses going through turnarounds, the support queue is often the first place I look. It is an unfiltered view of where the product or service is failing to meet expectations, and it is usually far more honest than anything a formal research programme would surface.
Churn interviews are arguably the highest-value VoC input of all, and they are almost never done well. Most businesses either do not conduct them at all, or conduct them in a way that is so clearly defensive that the customer tells them what they want to hear. A genuinely open churn interview, conducted by someone who is not going to use the conversation to save the account, is one of the most commercially useful conversations a business can have. The customers who left have no reason to be polite, and the things they say tend to be precise.
There is also value in paying attention to what customers say about you in spaces you did not create. Community forums, review platforms, social conversations, and even the comment sections of competitor content can surface perceptions and language that your own research would never generate. Understanding what makes your position unpopular with some audiences is as strategically useful as understanding what makes it compelling to others.
How Do You Turn VoC Insight Into Marketing That Works?
This is the commercial question that most VoC frameworks sidestep. They focus on the research process and leave the application to the reader. The application is where the value is.
The most direct application is messaging. When you understand the exact language customers use to describe their problem, their hesitations, and their criteria for choosing a solution, you have a brief. Not a research report, a brief. The words your customers use are the words your marketing should use. This sounds obvious. It is not practised nearly as often as it should be.
I have worked with businesses that had genuinely excellent products but were marketing them in language that made sense internally and meant nothing to the customer. The positioning was built around how the company thought about the product, not how the customer thought about their problem. Every piece of VoC work we did pointed to the same thing: the customer’s frame of reference was different from the brand’s frame of reference, and the gap was costing them conversions. When we realigned the messaging to the customer’s language, the performance improvement was immediate and significant. Not because we changed the product, but because we stopped asking customers to translate.
Beyond messaging, VoC insight should inform channel prioritisation, product development, pricing strategy, and service design. If customers consistently tell you that the post-purchase experience is where trust is built or lost, that is a resource allocation signal. If they tell you that price is not the primary concern but transparency of pricing is, that is a conversion optimisation signal. The Obama campaign’s approach to testing and iteration is a useful reference point here: the principle of running structured experiments based on real audience insight, rather than assumption, applies directly to how VoC findings should be tested and validated.
The businesses I have seen grow most consistently are not the ones with the most sophisticated marketing. They are the ones that are genuinely attentive to what their customers need and relentless about removing the friction between that need and the solution. Marketing in those businesses is not compensating for product or service failures. It is amplifying something that already works. VoC is how you know whether you are in that position or not.
What Are the Most Common VoC Mistakes?
Asking leading questions is the most common methodological failure. Survey design and interview facilitation are skills that require training, and most VoC programmes are run by people who have not had that training. A question like “how much did you enjoy your experience with us?” is not a VoC question. It is a prompt for social desirability bias. Good VoC questions are open, neutral, and focused on the customer’s experience rather than the brand’s desired outcome.
Surveying only happy customers is a structural bias that corrupts entire programmes. If your VoC data is collected primarily at moments of positive engagement (post-purchase, post-renewal, after a successful support resolution), you are building a picture of your best-case experience. The customers who had a poor experience and said nothing, or who left without being asked, are invisible in your data. This is a significant problem if you are using VoC to inform strategic decisions.
Treating VoC as a marketing function rather than a business function is another common mistake. Customer insight that is owned by marketing and never reaches product, operations, or leadership is insight that can only improve marketing. That is useful but limited. The most commercially significant applications of VoC tend to be in product development, service design, and pricing, areas that marketing often does not control. For VoC to drive business outcomes, it needs to be a cross-functional input, not a marketing deliverable.
Finally: over-indexing on your most vocal customers. The customers who respond to surveys, write reviews, and engage in community forums are not representative of your customer base. They are the ones with the strongest opinions. That makes their input valuable but not definitive. Structuring your entire VoC programme around the most engaged segment will produce insights that are accurate for that segment and potentially misleading for everyone else.
If you are building out a broader research and intelligence capability alongside your VoC programme, the Market Research and Competitive Intelligence hub covers the full range of methods and frameworks worth having in your toolkit.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
