Voice of the Customer: What Most Strategies Get Wrong

A voice of the customer strategy is a structured approach to capturing what your customers actually think, need, and feel about your product or service, then using those insights to shape decisions across marketing, product, and commercial strategy. Done well, it replaces assumption with evidence and gives your team a shared language for what the market is telling you.

Done badly, it becomes a research exercise that sits in a deck, gets referenced in a quarterly review, and changes nothing.

Key Takeaways

  • VoC programs fail most often because the insights never reach the people with budget authority or decision-making power.
  • The most useful customer signals are rarely in your CRM. They live in support tickets, sales call notes, and review platforms.
  • Asking customers what they want is less useful than understanding why they stayed, switched, or nearly left.
  • Sentiment scores and NPS are lagging indicators. By the time they move, the underlying problem is already embedded.
  • A VoC strategy without a defined activation path is just market research with a fancier name.

I have run agencies where we built VoC programs for clients across retail, financial services, and B2B technology. The pattern was consistent: the clients who got the most value were not the ones with the most sophisticated research methodology. They were the ones who had already decided they were going to act on what they found, before the research started.

Why Most VoC Programs Produce Insights Nobody Uses

There is a structural problem with how most organisations approach voice of the customer. Research is commissioned by marketing. The findings are presented to marketing. Marketing writes a report. The report goes to stakeholders who were not involved in designing the research, do not fully trust the methodology, and have their own instincts about what customers think.

Nothing changes.

I have sat in enough post-research debriefs to know that the question “what are we going to do differently as a result of this?” is asked far less often than it should be. The research becomes a reference point rather than a catalyst. Teams cite it when it confirms what they already believed and quietly set it aside when it does not.

This is not a research quality problem. It is a process design problem. If you build a VoC program without defining what decisions it will inform and who has authority to act on it, you are building a library, not a strategy.

If you want a broader grounding in how market research fits into commercial strategy, the Market Research and Competitive Intel hub covers the full landscape, from customer insight through to competitive positioning.

Where Customer Signals Actually Live

Most VoC strategies default to surveys. Surveys have their place, but they are a structured prompt for a response, which means you are already shaping what you hear. The customer is answering your question, not telling you what is actually on their mind.

The more valuable signals are usually unstructured and already exist inside your business. Support tickets contain the language customers use when something has gone wrong. Sales call recordings contain the objections that never make it into the CRM. Churn interviews, when conducted honestly, contain the real reasons someone left, which are almost never the reasons recorded in the exit survey.
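None of this requires heavy tooling to get started. Here is a minimal Python sketch of a first pass over ticket language; the sample tickets and stopword list are illustrative stand-ins for a real helpdesk export, not a production pipeline.

    from collections import Counter
    import re

    # Illustrative stand-in for a real helpdesk export (Zendesk,
    # Intercom, etc.): plain-text ticket bodies, nothing more.
    tickets = [
        "Still can't connect my account after three days of setup",
        "Setup instructions are confusing and the onboarding call never happened",
        "Billing page is fine but I couldn't finish the initial setup",
    ]

    STOPWORDS = {"the", "and", "are", "but", "can't", "couldn't", "after", "my", "of"}

    def theme_counts(texts):
        # Crude word frequencies: a first proxy for the language
        # customers use when something has gone wrong.
        words = []
        for text in texts:
            words += [w for w in re.findall(r"[a-z']+", text.lower())
                      if w not in STOPWORDS and len(w) > 3]
        return Counter(words)

    print(theme_counts(tickets).most_common(5))
    # "setup" dominates even in this tiny sample

Crude word counts will not replace proper analysis, but they are often enough to show whether "pricing" or "setup" actually dominates the conversation before anyone commissions new research.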

Review platforms are particularly underused. Customers writing reviews are not trying to help your research team. They are expressing something they feel strongly enough to put into words without being asked. That is a different quality of signal than a prompted satisfaction score. Tools like Hotjar can help surface behavioural signals on-site, but the richest qualitative data often comes from channels you are not monitoring at all.

Early in my career, I worked with a client who was convinced their core customer issue was pricing. Every internal meeting circled back to it. When we actually mapped the support ticket language and ran a series of customer interviews, pricing barely came up. The real issue was onboarding. Customers who struggled in the first thirty days left. Customers who got through that window stayed for years. The business had been optimising for the wrong variable because nobody had looked at what customers were actually saying.

The Difference Between Asking and Listening

Asking customers what they want is a reasonable starting point. It is not a reliable endpoint. People are not always able to articulate what they need, and they are even less reliable at predicting what they would value in the future. This is not a criticism of customers. It is a basic limitation of self-reported data.

The more productive questions are retrospective. Why did you choose us over the alternative? What nearly stopped you from buying? If you were going to leave, what would be the reason? These questions surface decision-making logic rather than wish lists.

The distinction matters because it changes what you do with the answers. “What features would you like?” produces a backlog. “What nearly made you leave?” produces a retention strategy. One is aspirational. The other is diagnostic.

There is also a difference between what customers say and what they do. Behavioural data (session recordings, funnel drop-off points, conversion patterns) tells you something about actual decision-making that stated preferences cannot. A customer might tell you they value price transparency and then convert on a page that buries the pricing. Behaviour is a more honest signal than a survey response, and a complete VoC strategy uses both.

What NPS and Satisfaction Scores Are Actually Telling You

NPS became the default VoC metric for a reason. It is simple, comparable over time, and easy to report upward. The problem is that it is a lagging indicator of sentiment that has already formed. By the time your NPS score drops, the experience that caused it happened weeks or months ago, and the customers most affected may have already left.
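For reference, the arithmetic behind the score is simple and worth keeping in view. The sketch below uses the standard definition, promoters scoring 9 to 10 and detractors 0 to 6 on the 0-to-10 recommendation question, with made-up responses to illustrate the lag.

    def nps(scores):
        # Standard Net Promoter Score: percentage of promoters (9-10)
        # minus percentage of detractors (0-6), on a -100 to 100 scale.
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return round(100 * (promoters - detractors) / len(scores))

    # Made-up responses for two quarters. The detractor experiences
    # behind Q2 happened during Q1; the metric reports them a quarter late.
    q1 = [9, 10, 9, 8, 7, 9, 10, 8, 9, 10]
    q2 = [9, 10, 6, 5, 7, 4, 10, 8, 3, 7]
    print(nps(q1), nps(q2))  # 70 -10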

Satisfaction scores have the same limitation. They measure how a customer felt after an interaction, not whether that interaction was shaping their long-term relationship with the brand. A customer can rate a support call highly and still churn because the underlying product problem was never resolved.

I judged the Effie Awards for several years, reviewing campaigns that were built on genuine customer understanding versus those that were built on assumptions dressed up as insight. The campaigns that held up were almost always grounded in a specific, uncomfortable truth the brand had uncovered about its customers. Not a flattering truth. Not a truth that confirmed the brand’s existing positioning. A truth that required the brand to change something.

That kind of truth rarely surfaces from a satisfaction score. It comes from qualitative work: interviews, ethnographic observation, longitudinal tracking of behaviour over time. The score tells you there is a problem. The qualitative work tells you what it actually is.

Forrester’s work on how markets and customer expectations shift is a useful reminder that even well-designed VoC programs need to account for context. What customers value is not static, and a program built on last year’s priorities may be measuring the wrong things entirely.

How to Build a VoC Program That Actually Feeds Decisions

The structure of a functional VoC program is less complicated than most organisations make it. The challenge is not the methodology. It is the discipline to connect what you learn to what you do.

Start with the decisions you need to make. Not the questions you are curious about. The actual commercial decisions that are sitting on the table: which segment to prioritise, which product feature to build next, which message to lead with, which retention lever to pull. Define those decisions first, then design the research to inform them.

This sounds obvious. It is not how most programs are built. Most programs start with a research methodology and then look for decisions to attach it to. That is backwards, and it produces insights that are interesting but not actionable.

Once you have defined the decisions, map the data sources that already exist inside your business before you commission any new research. Support tickets, sales call notes, churn records, review platform data, session behaviour. Most organisations have more customer signal than they have processed. Mining what you already have is faster and cheaper than generating new data, and it often surfaces the most honest picture of customer experience because it was not designed to be reported upward.
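One practical step is normalising those scattered sources into a single shape before any analysis. Here is a minimal sketch, assuming each channel can be exported to something tabular; the field names and the from_review helper are hypothetical, so map them to whatever your tools actually emit.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Signal:
        # One normalised piece of customer feedback, whatever the channel.
        source: str       # "support_ticket", "sales_note", "review", ...
        captured: date
        customer_id: str
        verbatim: str     # the customer's own words, unedited

    def from_review(row: dict) -> Signal:
        # Hypothetical review-platform export; map your real fields here.
        return Signal("review", date.fromisoformat(row["date"]),
                      row["reviewer_id"], row["body"])

    signals = [from_review({"date": "2024-03-02", "reviewer_id": "r-118",
                            "body": "Setup took weeks and nobody followed up."})]

Once every channel lands in the same shape, the kind of theme counting sketched earlier runs across all of it rather than one silo at a time.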

Then layer in primary research where the existing data has gaps. Interviews for depth. Surveys for scale. Behavioural testing for validation. The mix depends on what you are trying to understand, not on what your research budget defaults to.

The activation path is where most programs break down. Every insight that comes out of the program should have a named owner, a defined decision it informs, and a timeline. If an insight does not have those three things, it is not ready to leave the research phase. A finding without an owner is just an observation.
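In practice this can be as lightweight as a shared record that enforces exactly those three things. A sketch with illustrative names; the structure matters far more than the tooling.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Insight:
        finding: str      # what the research showed
        owner: str        # a named person, not a team
        decision: str     # the specific decision this informs
        review_by: date   # when the owner reports back

    backlog = [
        Insight(finding="Churn concentrates in the first 30 days",
                owner="Head of Customer Success",
                decision="Whether to rebuild onboarding this quarter",
                review_by=date(2025, 9, 30)),
    ]

    # Anything missing an owner, a decision, or a date is an
    # observation, not an insight, and stays in the research phase.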

Optimizely’s 2024 B2B ecommerce research is a useful reference point for how customer expectations are shifting in complex buying environments. The gap between what B2B buyers expect and what most vendors deliver is significant, and it is the kind of gap that a well-run VoC program should be surfacing before it shows up in churn data.

The Metrics That Matter More Than Sentiment Scores

If you are running a VoC program, you need to know whether it is working. That means measuring the program itself, not just the customer sentiment it tracks.

Three metrics are worth tracking: how many decisions were informed by VoC insight in a given quarter, how many product or service changes were made as a direct result, and whether the issues identified in the research actually moved when the changes were made. These are outcome metrics, not activity metrics.
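A quarterly roll-up of those three numbers can come straight out of the activation records sketched earlier. A minimal example with made-up entries; the shipped and issue_moved fields are illustrative additions, not a standard schema.

    # Illustrative quarterly log: each entry extends the activation
    # record above with what happened after the insight was handed over.
    log = [
        {"decision": "Rebuild onboarding", "shipped": True, "issue_moved": True},
        {"decision": "Reprice starter tier", "shipped": False, "issue_moved": None},
        {"decision": "New review-response flow", "shipped": True, "issue_moved": False},
    ]

    decisions_informed = len(log)
    changes_shipped = sum(1 for entry in log if entry["shipped"])
    issues_moved = sum(1 for entry in log if entry["issue_moved"])

    print(f"decisions informed: {decisions_informed}, "
          f"changes shipped: {changes_shipped}, "
          f"issues that moved: {issues_moved}")  # 3, 2, 1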

Activity metrics (how many surveys were sent, how many interviews were conducted, what the response rate was) tell you about the health of the research operation. They do not tell you whether the research is creating value. The distinction between vanity metrics and outcome metrics applies to internal programs just as much as it applies to marketing campaigns.

One thing I have found useful is a simple quarterly review where the VoC lead presents not the insights from the research, but the decisions that were made because of it. That framing shifts the conversation from “what did we learn?” to “what did we do?” It also creates accountability for the parts of the business that received the insights and were expected to act on them.

When VoC Reveals a Problem Marketing Cannot Fix

This is the part of the conversation that most VoC frameworks skip over, because it is uncomfortable. Sometimes the customer insight that comes back is not a message problem or a targeting problem. It is a product problem, a pricing problem, or an experience problem that sits well outside marketing’s remit.

I have held the view for a long time that marketing is often used as a blunt instrument to prop up businesses with more fundamental issues. A company that genuinely delighted customers at every meaningful touchpoint would need far less marketing than most organisations run. The VoC program that surfaces a fundamental product gap is doing exactly what it should do. The question is whether the organisation has the appetite to hear it.

When I was leading an agency through a growth phase, we ran a VoC exercise on our own business. Not for a client. On ourselves. What came back was not comfortable. Clients valued our strategic thinking but found our reporting slow and inconsistent. We had been optimising for the quality of the work and under-investing in the experience of receiving it. We changed the reporting structure and the communication cadence. Retention improved. That is what a VoC program is supposed to do.

If your VoC program consistently produces insights that are comfortable and confirm what you already believed, either your customers are unusually satisfied or your research design is producing the answers you wanted to hear. Both are worth examining.

There is more on how to structure research and competitive intelligence so it actually informs strategy, rather than just documenting the landscape, in the Market Research and Competitive Intel hub. The principles that make VoC useful are the same ones that make any market research worth the investment.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a voice of the customer strategy?
A voice of the customer strategy is a structured process for capturing, analysing, and acting on what customers think, feel, and need. It draws on multiple data sources including surveys, interviews, behavioural data, and unstructured feedback, and connects those insights to specific commercial decisions rather than treating research as a standalone activity.
How is voice of the customer different from standard market research?
Standard market research often focuses on market sizing, competitive positioning, or category trends. Voice of the customer is specifically focused on the experience, language, and decision-making of your existing and prospective customers. It tends to be more qualitative, more longitudinal, and more directly connected to product and experience decisions than traditional market research.
What are the most common reasons VoC programs fail?
The most common failure is that insights are never connected to decisions. Research is conducted, findings are presented, and the organisation moves on without changing anything. Other common failures include designing research to confirm existing beliefs, relying too heavily on satisfaction scores that lag behind actual customer experience, and failing to involve the people with budget authority in the research design process.
Is NPS a reliable measure of customer sentiment?
NPS is a useful directional indicator, but it is a lagging metric. By the time a score shifts, the experiences driving it have already happened, and the customers most affected may have already left. It works best as one signal among several, combined with qualitative research that explains why the score is moving rather than just confirming that it has.
How often should a VoC program be updated?
There is no single right answer, but a program that only runs annually is unlikely to keep pace with how customer expectations shift. A more effective model combines always-on listening through existing channels like support tickets and review platforms, with periodic primary research tied to specific decision cycles, such as product planning, pricing reviews, or campaign strategy.