Customer Research Methods That Change Decisions
Customer research is the practice of gathering information directly from or about your customers to inform marketing, product, and commercial decisions. The main types span qualitative methods like interviews and focus groups, quantitative methods like surveys and usage data, and behavioural methods like session recording and A/B testing. Each type answers a different question, and the biggest mistake most marketing teams make is defaulting to whichever method feels most familiar rather than choosing the one that fits the decision they need to make.
Done well, customer research reduces the gap between what you think your customers want and what they actually do. Done poorly, it produces slide decks full of charts that confirm existing assumptions and change nothing.
Key Takeaways
- The type of research you choose should be determined by the decision you need to make, not by what your team is already comfortable running.
- Qualitative research tells you why customers behave a certain way. Quantitative research tells you how many and how often. You need both to build a complete picture.
- Behavioural data is more reliable than stated preferences. What customers do is more useful than what they say they would do.
- Most customer research programmes fail not because of methodology, but because findings are never connected to a specific commercial decision or action.
- A small number of well-run customer interviews will often surface more useful insight than a large survey with poorly constructed questions.
Why Most Teams Research the Wrong Things
I’ve sat in a lot of research debrief sessions over the years. The pattern is depressingly consistent: a team runs a survey, gets 400 responses, builds a deck with pie charts, presents it to senior stakeholders, and then… nothing changes. The research gets filed. The campaign brief that was written before the research was commissioned goes ahead largely unchanged.
The problem is rarely the research itself. It’s that the research was commissioned without a clear decision attached to it. Someone wanted to “understand the customer better” without specifying what they would do differently depending on what they found. That’s not research. That’s comfort-seeking dressed up as rigour.
Good customer research starts with a question that has a decision attached to it. “Should we reposition this product for a different audience segment?” is a researchable question. “What do our customers think of us?” is not, unless you have a very specific reason for asking and a clear plan for what you’ll do with the answer.
If you want a broader grounding in how research fits into a commercial marketing strategy, the Market Research and Competitive Intelligence hub covers the full landscape, from tools and methods to how to build a programme that actually informs decisions rather than just documenting them.
What Are the Main Types of Customer Research?
Customer research methods fall into three broad categories: qualitative, quantitative, and behavioural. Within each category there are specific methods, each with different strengths, costs, and appropriate use cases. The categories are not mutually exclusive. The most useful research programmes combine methods deliberately.
Qualitative Research
Qualitative research is designed to understand the reasoning, motivations, and language behind customer behaviour. It produces depth rather than breadth. You cannot use it to make statistically valid generalisations about your whole customer base, but you can use it to understand why a segment of customers behaves the way it does, and to surface hypotheses worth testing at scale.
Customer interviews are the most valuable qualitative method and the most consistently underused. A well-structured 45-minute conversation with a current customer, a lapsed customer, or a prospect who chose a competitor will surface more actionable insight than almost anything else you can do. What matters is asking about behaviour and experience, not opinion. “Walk me through the last time you had to solve this problem” is a better question than “What do you look for in a product like ours?”
When I was running an agency and we were pitching for a significant retail client, we spent two days doing informal customer intercepts in their stores before we wrote a single slide. The client had briefed us on a brand awareness problem. What we found was a customer experience problem that no amount of advertising spend was going to fix. We won the pitch partly because we were the only agency that had spoken to an actual customer before walking into the room.
Focus groups are more useful than their reputation suggests, but only when used for the right things. They are good for exploring reactions to creative concepts, testing the language you’re using in messaging, and understanding how customers talk about a category among themselves. They are poor for predicting purchase behaviour, because group dynamics distort individual responses. People moderate their opinions in groups. The loudest voice in the room tends to pull the discussion in one direction.
Ethnographic research involves observing customers in their natural environment rather than asking them to describe their behaviour in a research context. It’s expensive and time-consuming, but for products where context matters, it is often the only way to understand what is actually happening. Customers are notoriously bad at describing their own behaviour accurately, not because they’re dishonest but because most of what we do is habitual and below the level of conscious awareness.
Quantitative Research
Quantitative research is designed to measure. It tells you how many customers hold a particular view, how frequently a behaviour occurs, and whether a difference between two groups is statistically meaningful. It requires larger sample sizes and more structured methodology than qualitative research, but it produces findings you can generalise from and track over time.
Surveys are the workhorse of quantitative customer research. They are cheap to run, scalable, and flexible. They are also routinely done badly. The most common failure modes are leading questions that prime respondents toward a particular answer, question overload that causes fatigue and reduces data quality, and scales that look precise but are measuring nothing meaningful.
A well-designed survey takes longer to build than most teams expect. The question wording, the order of questions, the response scales, and the sample frame all affect the quality of the data you get. If you’re running a survey to inform a significant commercial decision, it is worth getting someone who understands survey methodology to review it before you send it. The cost of a bad survey is not just the time you wasted running it. It’s the decisions you made based on data that was telling you something other than what you thought it was.
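To put a number on why sample size matters, here is a minimal sketch of the textbook formula for how many responses you need to estimate a proportion within a given margin of error, assuming a simple random sample. It's plain Python with no survey tool attached, and the function name and defaults are mine, not any platform's; treat it as a planning aid, not a substitute for a methodology review.

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_proportion(margin_of_error: float,
                               confidence: float = 0.95,
                               expected_proportion: float = 0.5) -> int:
    """Minimum responses needed to estimate a proportion within
    +/- margin_of_error at the given confidence level, assuming a simple
    random sample. Uses the normal approximation n = z^2 * p(1-p) / e^2;
    expected_proportion = 0.5 is the conservative worst case."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 at 95% confidence
    p = expected_proportion
    return ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion(0.05))  # ~385 responses for +/-5 points
print(sample_size_for_proportion(0.03))  # ~1,068 responses for +/-3 points
```

The jump from roughly 385 responses at a five-point margin to over a thousand at three points is the practical reason most surveys carry wider error bars than their charts imply.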
Net Promoter Score is the most widely used quantitative customer metric and also one of the most misunderstood. It measures stated likelihood to recommend, which correlates with loyalty in some categories and barely at all in others. It is useful as a consistent tracking metric if you use it correctly and benchmark it against the right comparison set. It is not useful as a proxy for customer satisfaction, marketing effectiveness, or business health on its own. I’ve seen companies celebrate an NPS improvement while losing market share. The number was going up because they’d stopped serving the customers most likely to be dissatisfied, not because the experience had improved.
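For what it's worth, the arithmetic behind the score is trivial, which is part of why it travels so well and gets stretched so far. A minimal sketch of the standard calculation:

```python
def net_promoter_score(scores: list[int]) -> float:
    """NPS from 0-10 'likelihood to recommend' responses: percentage of
    promoters (9-10) minus percentage of detractors (0-6). Passives (7-8)
    count toward the total but not toward either side."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses -> NPS of +10
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 3, 5]))
```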
Customer satisfaction surveys and post-transaction surveys capture feedback at a specific moment in the customer experience. They are most useful when tied to a specific touchpoint, because satisfaction at the point of purchase is often very different from satisfaction three months into using a product. If you’re only measuring satisfaction immediately after a sale, you’re measuring the honeymoon period, not the relationship.
Segmentation research uses quantitative methods to identify distinct groups within your customer base based on attitudes, behaviours, or needs. It is one of the most commercially useful things you can do if you’re trying to prioritise which customers to invest in and how to tailor your approach to different groups. It requires a reasonable sample size and some analytical capability to do well, but the output, a clear picture of which customer segments exist and what drives each of them, is the foundation of almost everything else in marketing strategy.
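If you have the data in-house, a common analytical starting point is cluster analysis on per-customer attributes. The sketch below uses k-means via scikit-learn; the attributes, the toy numbers, and the choice of two segments are all placeholders, and any real segmentation would test several cluster counts and sanity-check the output against qualitative findings before anyone builds strategy on it.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer attributes; swap in whatever you actually measure.
customers = pd.DataFrame({
    "spend":            [120, 95, 2300, 1800, 60, 2100, 140, 80],
    "orders_per_year":  [2, 1, 14, 11, 1, 12, 3, 2],
    "tenure_months":    [6, 3, 48, 36, 2, 52, 9, 4],
    "support_tickets":  [0, 1, 5, 2, 0, 6, 1, 0],
})

# Standardise so no single attribute dominates the distance calculation.
X = StandardScaler().fit_transform(customers)

# Two segments here because the toy data is tiny; in practice you would try a
# range of cluster counts and keep the most interpretable solution.
customers["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Profile each segment to see what actually distinguishes it.
print(customers.groupby("segment").mean().round(1))
```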
Behavioural Research
Behavioural research captures what customers actually do rather than what they say they do or would do. It is the most reliable category of customer data for predicting future behaviour, because it is based on observed actions rather than stated intentions.
Web analytics tells you how customers interact with your digital properties: which pages they visit, how long they stay, where they drop off, and which paths lead to conversion. It is a form of customer research that most teams already have access to but rarely use as systematically as they should. The data is there. The question is whether anyone is asking it a coherent question.
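A coherent question usually looks like a funnel: of the people who started a journey, how many reached each subsequent step, and where did the rest go? The sketch below assumes you can export event-level data with a user identifier and a step name; the step names and the tiny inline dataset are purely illustrative, not tied to any particular analytics product.

```python
import pandas as pd

# Hypothetical export: one row per user per funnel step reached.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["landing", "product", "checkout",
                "landing", "product",
                "landing", "product", "checkout",
                "landing"],
})

funnel = ["landing", "product", "checkout"]
users_per_step = [events.loc[events["step"] == s, "user_id"].nunique() for s in funnel]

# Show how many unique users survive to each step, relative to the start.
for i, step in enumerate(funnel):
    pct_of_start = 100 * users_per_step[i] / users_per_step[0]
    print(f"{step:<10} {users_per_step[i]:>4} users  ({pct_of_start:.0f}% of start)")
```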
Session recording and heatmapping tools like Hotjar’s behaviour analytics software let you watch how individual users interact with a page, where they click, where they scroll to, and where they hesitate or abandon. This kind of data is particularly useful for diagnosing conversion problems, because it shows you what is actually happening on a page rather than requiring you to infer it from aggregate metrics. If you haven’t used session recording before, Hotjar has a free tier that gives you enough data to start identifying patterns on your key pages.
A/B testing is the most rigorous form of behavioural research available to most marketing teams. It allows you to isolate the effect of a single variable on customer behaviour by showing different versions of something to randomly assigned groups. The discipline required to run A/B tests well (defining a clear hypothesis, calculating the sample size you need before you start, and not calling the test early) is more than most teams apply. But when done properly, it is the closest thing to a controlled experiment that marketing has.
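To make the sample size discipline concrete, here is a sketch of the standard two-proportion planning formula for how many visitors each variant needs, written in plain Python rather than any particular testing tool. The baseline and uplift figures are placeholders.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(baseline_rate: float, target_rate: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed in each variant to detect a change from
    baseline_rate to target_rate with a two-sided two-proportion test.
    Standard planning formula; treat the output as an order-of-magnitude guide."""
    p1, p2 = baseline_rate, target_rate
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a lift from a 3.0% to a 3.6% conversion rate needs roughly
# 14,000 visitors per variant before the result means anything.
print(visitors_per_variant(0.03, 0.036))
```

The output is sobering: detecting a 0.6-point lift on a 3% conversion rate needs roughly 14,000 visitors per variant, which is why low-traffic sites are usually better off testing big changes or not testing at all.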
The challenge with A/B testing is that it tells you which version performs better, not why. You need qualitative research to explain the mechanism. The combination of behavioural data that surfaces a problem and qualitative research that explains it is considerably more powerful than either method alone. Resources like Unbounce’s conversion research illustrate how behavioural signals and conversion data work together to surface insights that neither source would reveal independently.
Purchase and usage data from your CRM or transaction systems is often the most underutilised source of customer insight available. It tells you what customers actually bought, how often, in what combinations, and when they stopped. If you have a reasonable volume of transaction data and someone who can analyse it properly, you can answer questions about customer lifetime value, churn patterns, cross-sell behaviour, and the difference between your most and least valuable customer segments without running a single survey.
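A recency-frequency-value summary per customer answers several of those questions at once and is usually the first cut worth making. The sketch below assumes a transaction export with a customer identifier, an order date, and an amount; the column names and the tiny inline dataset are illustrative, not from any specific CRM.

```python
import pandas as pd

# Hypothetical transaction history; swap in your own CRM or order export.
orders = pd.DataFrame({
    "customer_id": ["a", "a", "a", "b", "b", "c", "d", "d", "d", "d"],
    "order_date":  pd.to_datetime(["2024-01-05", "2024-03-12", "2024-06-01",
                                   "2023-11-20", "2024-02-02",
                                   "2023-08-15",
                                   "2024-01-10", "2024-02-10", "2024-04-10", "2024-06-10"]),
    "amount":      [40, 55, 60, 200, 180, 25, 90, 85, 95, 100],
})

# Recency, frequency, and total value per customer, relative to the latest order date.
snapshot = orders["order_date"].max()
rfm = orders.groupby("customer_id").agg(
    recency_days=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# A simple value band is often enough to start separating segments worth different treatment.
rfm["value_band"] = pd.qcut(rfm["monetary"], 2, labels=["lower", "upper"])
print(rfm.sort_values("monetary", ascending=False))
```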
How Do You Choose the Right Method?
The method should follow the question. That sounds obvious, but in practice most teams choose the method first, usually based on what they’ve done before or what’s easiest to commission, and then construct a question that fits it.
A useful heuristic: if you don’t know what you don’t know, start qualitative. Customer interviews will surface the hypotheses worth testing. If you have a specific hypothesis and need to know whether it holds across your customer base, go quantitative. If you want to understand what customers are doing rather than what they think, go behavioural.
Cost is a real constraint, and it’s worth being honest about it. A full segmentation study with a representative sample is not cheap. But six customer interviews cost almost nothing if you do them yourself, and they will often surface more useful insight than a survey commissioned from an agency and presented in a 40-slide deck. I spent the first few years of my career doing exactly that kind of scrappy, direct research because there was no budget for anything else. The discipline of having to find your own answers without a research budget teaches you to ask better questions.
The other factor is time. Qualitative research takes longer to analyse than quantitative research. Behavioural data is often available immediately but requires interpretation. If you need to make a decision in two weeks, a large-scale survey is not your answer. Six customer interviews and a review of your existing analytics data might be.
What Makes Customer Research Actually Useful?
The research methods are the easy part. The hard part is the organisational discipline to use findings to change decisions rather than just to document what you already suspected.
There are a few things that separate research programmes that change decisions from those that don’t.
The decision is defined before the research starts. “We will reposition this product for segment B if the evidence suggests that segment B has the unmet need we think it does” is a research brief. “Let’s understand our customers better” is not.
The findings are connected to a specific owner. Research that is presented to a room and then filed has no owner. Research that is handed to a specific person with a specific responsibility for acting on it has a much higher chance of changing something.
Uncomfortable findings are not smoothed over. I’ve seen research findings reframed in presentation to avoid telling senior stakeholders something they don’t want to hear. It’s understandable. It’s also a complete waste of the research budget. The value of customer research is precisely that it surfaces things the internal team can’t see or won’t say. If you’re going to suppress that, you might as well not bother.
One of the most useful things I did when running a turnaround was commission a customer exit survey for a product that had been declining for two years. The internal team had a clear view on why customers were leaving, a view that conveniently pointed to external market factors rather than anything the business had done. The research told a different story. The product had a specific functional problem that customers found unacceptable, and the business had been aware of it at a technical level but hadn’t prioritised fixing it. The research made that impossible to ignore. Within six months the problem was fixed. Within eighteen months the decline had reversed.
That is what good customer research is for. Not to confirm the strategy you already have. To surface the things you can’t see from inside the building.
The broader context for how customer research fits into a competitive and market intelligence programme is covered across the Market Research and Competitive Intelligence hub, which includes methods, tools, and how to build a research capability that scales with the business rather than being a one-off project.
A Note on Research and the Limits of What Customers Can Tell You
There is a version of the customer research conversation that goes too far in the other direction. The idea that you should simply ask customers what they want and build it for them is a misreading of how customer insight works.
Customers are expert witnesses on their own experience and their own problems. They are not expert witnesses on solutions. If you ask a customer what they want, they will describe a slightly better version of what they already have. They will not describe something they have never encountered. That’s not a criticism of customers. It’s just how human cognition works.
The job of customer research is to surface the problem clearly enough that the people building solutions can do their job. “Customers find the checkout process confusing and abandon at the payment step” is a useful research finding. “Customers want a simpler checkout” is not, because it doesn’t tell you what simple means in this context or why the current experience fails.
The most effective marketing I’ve seen over 20 years has always been grounded in a clear understanding of a specific customer problem, not in a generalised sense of what customers like or dislike. That understanding comes from research. But it comes from research that is asking the right questions in the right way, not from research that is collecting data for its own sake.
If your marketing is consistently underperforming, the most likely explanation is not that your creative is weak or your media plan is wrong. It’s that you don’t have a clear enough picture of what problem you’re solving for which customer. More customer research, done properly, is almost always the right place to start.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
