Focus Groups Are Not Dead. They’re Just Misused.

A focus group is a moderated research session in which a small group of participants, typically six to ten people, discuss a product, service, concept, or message in response to guided questions. The output is qualitative: attitudes, language, reactions, and the reasoning behind them. Used well, focus groups surface things that surveys cannot, specifically the why behind a behaviour, not just the what.

The trouble is that focus groups have acquired a bad reputation they only partially deserve. They get blamed for producing bad decisions when the real culprit is usually how they were set up, how the findings were interpreted, or what question the business was actually trying to answer in the first place.

Key Takeaways

  • Focus groups generate qualitative insight, not quantitative proof. They tell you what people think and why, not how many people think it.
  • The most common failure mode is using focus groups to validate decisions already made, rather than to genuinely explore unknowns.
  • Moderator quality determines output quality more than any other variable. A leading question in session two can corrupt every finding that follows.
  • Focus groups work best early in a strategic process, when you are still forming hypotheses, not when you are seeking approval for a finished brief.
  • Combining focus group findings with behavioural data produces sharper insight than either method alone.

What Is a Focus Group Actually For?

The confusion starts with the question of purpose. A focus group is not a decision-making tool. It is a hypothesis-generating tool. That distinction matters more than most briefs acknowledge.

When I was running agency teams and we were developing positioning for clients, the most useful focus group sessions were the ones that happened before we thought we knew the answer. Not after. The sessions where a brand team came in with three campaign routes and wanted the group to pick one were almost always a waste of money. You end up with a watered-down consensus, a group dynamic that rewards the loudest voice, and findings that confirm whatever the brand manager already believed.

The sessions that produced something genuinely useful were the exploratory ones. Where the moderator’s job was to get people talking about their lives, their frustrations, their decision-making, with the brand or category as the backdrop rather than the centrepiece. That is when you get the language, the mental models, the unexpected associations that no survey would ever surface.

Market research is a broad discipline, and focus groups sit within it as one of several qualitative methods. If you are building a research programme from the ground up, it helps to understand where focus groups fit relative to other approaches. The Market Research and Competitive Intelligence hub covers the full landscape, from behavioural analytics to search intelligence, which gives useful context for where qualitative research earns its place.

Why Do Focus Groups Get a Bad Reputation?

The criticism of focus groups is not unfounded. There is a long list of decisions that went badly after focus group research suggested they would go well. New Coke is the example that gets cited most often. Consumers said they preferred the new formula in blind tests. They did not behave that way when it launched. The research was technically sound. The question being asked was wrong.

That is the first failure mode: asking focus groups to predict behaviour. They cannot do that reliably. What people say in a moderated group setting, in response to a concept or stimulus, is not the same as what they will do when they are standing in a supermarket aisle, distracted, time-pressured, and choosing from fifteen options. The gap between stated preference and actual behaviour is one of the oldest problems in consumer research, and focus groups sit right in the middle of it.

The second failure mode is social desirability. People in groups moderate their responses. They say what sounds reasonable, what sounds like the kind of thing a thoughtful consumer would say. They do not always say what they actually think, particularly if what they actually think sounds shallow, price-sensitive, or contrary to what the rest of the group seems to believe. The group dynamic itself is a confounding variable.

The third failure mode is confirmation bias on the client side. I have sat behind two-way mirrors and watched clients selectively note the comments that supported their existing view and dismiss the ones that challenged it. The research becomes a prop rather than a tool. The brief was already written. The focus group was there to provide cover.

None of these are arguments against focus groups. They are arguments against using them badly.

What Makes a Focus Group Worth Running?

The quality of a focus group is determined before anyone walks into the room. The setup, the brief, the screener, the discussion guide, and the choice of moderator are where the value is either built or lost.

A clear research question

The most dangerous brief for a focus group is a vague one. “We want to understand how consumers feel about our brand” will produce a two-hour session of pleasantries and a report full of observations that do not connect to any decision. A focused brief, one that specifies exactly what the team needs to know and why, produces sessions that are genuinely useful.

The best research questions I have seen are narrow enough to be answerable and open enough to allow for surprise. Something like: “When people in this category switch brands, what triggers the decision and what language do they use to justify it?” That is a question a focus group can actually help with. It has a specific scope. It is oriented toward behaviour. And it leaves room for the participants to take the conversation somewhere unexpected.

Defining the market opportunity clearly before commissioning research is a discipline that many teams skip. MarketingProfs has written on the challenge of defining market opportunity in a way that is practically useful rather than theoretically tidy, and the same rigour applies to focus group briefs.

A properly constructed screener

The screener is the recruitment questionnaire that determines who gets into the room. It is one of the most under-invested parts of the process and one of the most consequential. A poorly constructed screener fills the group with people who are either too similar to produce useful range, or too different to produce coherent findings.

The classic mistake is recruiting people who are too engaged with the category. Brand loyalists, heavy users, people who actively enjoy talking about the product, will give you articulate, interesting responses that are completely unrepresentative of the broader market. The people you actually need to understand, the light users, the switchers, the indifferent majority, are harder to recruit and less entertaining in session. But they are the ones whose behaviour usually determines commercial outcomes.

Another screener failure: not screening out people who work in marketing, research, or advertising. Respondents with professional knowledge of how research works will perform rather than respond. They know what you are looking for and they will give it to you. That is not insight. That is theatre.
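
The screener rules described above are mechanical enough to sketch as a filter plus quota logic. The code below is a hypothetical illustration, not a real recruitment tool: the field names, the exclusion list, the purchase-frequency band, and the loyalist cap are all assumptions made for the example.

```python
# Illustrative screener logic. Field names, quotas, and exclusion
# rules are assumptions for this sketch, not industry standards.
from dataclasses import dataclass

EXCLUDED_OCCUPATIONS = {"marketing", "market research", "advertising", "pr"}

@dataclass
class Candidate:
    occupation: str
    category_purchases_last_year: int  # self-reported usage frequency
    current_brand_tenure_years: float

def screen(candidate: Candidate) -> bool:
    """Return True if the candidate qualifies for the group."""
    # Exclude industry insiders: they perform rather than respond.
    if candidate.occupation.lower() in EXCLUDED_OCCUPATIONS:
        return False
    # Exclude non-users, but also cap heavy users so the group is not
    # dominated by articulate category enthusiasts.
    if not 1 <= candidate.category_purchases_last_year <= 12:
        return False
    return True

def fill_quota(candidates, target=8, max_loyalists=2):
    """Recruit up to `target` people, capping long-tenure brand loyalists."""
    recruited, loyalists = [], 0
    for c in candidates:
        if len(recruited) == target:
            break
        if not screen(c):
            continue
        if c.current_brand_tenure_years >= 5:
            if loyalists == max_loyalists:
                continue  # quota for loyalists already filled
            loyalists += 1
        recruited.append(c)
    return recruited
```

The point of the quota step is the one made above: loyalists are easy to recruit and entertaining in session, so without an explicit cap they crowd out the switchers and light users whose behaviour actually drives commercial outcomes.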

A discussion guide that earns its questions

The discussion guide is not a script. It is a framework. A good moderator uses it as a reference point, not a checklist. But the structure of the guide matters because it determines the order in which topics are introduced, and that order shapes what participants say.

The standard approach is to move from general to specific: start with broad category behaviour, then narrow to brand or product, then introduce specific stimuli or concepts. This prevents the research stimuli from contaminating earlier responses. If you show people your new packaging in the first ten minutes, every subsequent comment about the brand will be filtered through that impression.

The questions themselves need to be genuinely open. Not “did you find the messaging clear?” but “what did you take away from that?” Not “would you buy this?” but “walk me through how you would think about this if you saw it in a shop.” The difference sounds subtle. In practice it is the difference between a group that gives you answers and a group that gives you thinking.

A moderator who knows when to follow the thread

Moderator quality is the single biggest variable in focus group output. A skilled moderator manages group dynamics without suppressing individual voices, follows unexpected threads without losing the structure, and probes without leading. That is a specific and difficult skill set. It is not something a brand manager can do while also managing the client relationship from the back of the room.

I have seen sessions where the moderator’s body language alone was enough to shut down a line of conversation that was getting uncomfortable for the client. A slight lean back. A too-quick pivot to the next question. The group read it immediately and adjusted. The uncomfortable thing was exactly what needed to be explored. It never got explored. The report was sanitised before it reached the strategy team.

Online vs In-Person: Does the Format Change the Output?

The shift toward online focus groups accelerated significantly after 2020, and it has not fully reversed. Online groups are cheaper to run, easier to recruit for, and remove the geographic constraint that used to limit who you could include. They also change the dynamic in ways that are worth understanding before you choose a format.

In-person groups have a physical energy that online groups do not replicate. When something lands, you feel it in the room. When something falls flat, you feel that too. Body language, side conversations, the moment someone shifts in their seat when a concept is shown, these are signals that a video grid cannot capture with the same fidelity.

Online groups, on the other hand, tend to produce more honest responses on sensitive topics. The perceived anonymity of being in your own home, not in a room full of strangers, reduces social desirability bias slightly. Participants are also less likely to be dominated by a single strong personality because the moderator has more control over who speaks when.

Asynchronous online research, sometimes called online qual or digital diaries, is a separate format that is worth distinguishing from a standard focus group. Participants respond to prompts over several days, in their own time, which reduces the group dynamic problem entirely. The output is richer and more considered than a two-hour session, but it requires more analytical work on the back end and a longer project timeline.

The choice of format should follow the research question, not the budget. If you need to understand how people react to something in the moment, in real time, in conversation with others, an in-person group is usually better. If you need considered reflection on habitual behaviour, an asynchronous approach often produces more useful material.

How Many Groups Do You Actually Need?

The default answer from a research agency is “at least four”: two groups for each audience segment, so that the findings of one session can be checked against a second. That is a reasonable starting point, but it is also a commercial answer as much as a methodological one.

The honest answer is that you need enough groups to reach theoretical saturation, the point at which additional sessions are producing the same themes and no new ones. For a relatively homogeneous audience and a focused research question, that might be two or three groups. For a segmented audience with genuinely different profiles, you might need six or more.
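
The saturation rule described above can be sketched mechanically. The function below is a hypothetical illustration, assuming an analyst has coded each session into a set of theme labels: it reports the last session that contributed a new theme, once a chosen number of consecutive sessions have added nothing.

```python
# Hypothetical sketch of a saturation check. Each session is a set of
# theme codes assigned by an analyst; theme names are placeholders.
def saturation_point(sessions, quiet_sessions=2):
    """Return the 1-based index of the last session that added a new
    theme, once `quiet_sessions` consecutive sessions have added none.
    Returns None if saturation was never reached."""
    seen, quiet = set(), 0
    for i, themes in enumerate(sessions, start=1):
        new = set(themes) - seen
        seen |= set(themes)
        if new:
            quiet = 0  # this session still surfaced something
        else:
            quiet += 1
            if quiet == quiet_sessions:
                return i - quiet_sessions
    return None
```

The choice of `quiet_sessions` is a judgment call, not a standard: requiring two quiet sessions rather than one guards against a single unproductive group being mistaken for the end of the territory.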

What you should not do is run one group and treat the output as definitive. A single group is a conversation, not a finding. The group dynamic, the specific mix of participants, the moderator’s approach on that particular day, all of these introduce variability that a second group would either confirm or challenge.

I have seen brands make significant creative decisions on the basis of a single focus group. Sometimes it works out. More often, the decision was going to be made regardless of what the research said, and the single group provided just enough cover to justify it. That is not research. That is expensive decoration.

What Should You Do With the Findings?

The analysis and synthesis stage is where most focus group projects lose their value. The raw output of a qualitative session is a transcript, a set of notes, and a moderator’s initial impressions. Turning that into something actionable requires analytical rigour that many project teams underestimate.

Thematic analysis, the process of identifying patterns across sessions and building a coherent interpretation of what they mean, is not the same as summarising what people said. A summary is a record. An analysis is an argument. The best focus group reports I have read make a clear argument about what the findings mean for the business decision at hand, with the evidence to support it and an honest acknowledgement of what the research cannot tell you.

The findings should be connected back to the original research question. If the brief was “what triggers switching behaviour in this category?”, the report should answer that question directly, with the supporting evidence from the sessions, and with a clear statement of confidence. Not “participants suggested that price might be a factor” but “price sensitivity is the primary stated trigger for switching, most commonly activated by a visible promotional offer from a competitor, though the emotional framing participants used suggests that the real driver is a sense of being taken for granted by their current brand.”

That second version is more useful. It is also harder to write. It requires the analyst to have genuinely understood what they heard, not just catalogued it.

How Do Focus Groups Fit Into a Broader Research Programme?

Focus groups work best as part of a broader research architecture, not as a standalone exercise. The most effective programmes I have been involved in used qualitative research to generate hypotheses and quantitative research to test them at scale.

The sequence matters. Qualitative first, quantitative second. Run focus groups to understand the territory, the language people use, the mental models they apply, the tensions they feel. Then design a survey that tests the specific hypotheses those sessions generated, using the actual language participants used rather than the language the brand team prefers.

Reversing the sequence, running a large survey first and then using focus groups to explain the findings, is less efficient. You end up using expensive qualitative time to interpret quantitative data that could have been better designed if the qualitative work had come first.

Behavioural data adds a third layer. What people say in a focus group and what they do when they are actually in market are different things. Combining focus group findings with behavioural signals, from tools like session recording and behavioural analytics, gives you a more complete picture. The qualitative work tells you the reasoning. The behavioural data tells you the reality. The gap between them is often where the most useful insight lives.

BCG has written about how organisations that build genuine insight capability, rather than commissioning one-off research projects, tend to make better strategic decisions over time. The BCG perspective on innovation and growth touches on the structural conditions that allow insight to actually influence decisions, which is a different challenge from generating the insight in the first place.

The Specific Things Focus Groups Are Good At

Rather than defending focus groups in the abstract, it is more useful to be specific about where they earn their place.

Language mining. Focus groups are one of the best ways to understand how an audience actually talks about a category, a problem, or a product. The language participants use spontaneously is more useful for copywriting, messaging, and search strategy than anything a brand team generates internally. When I was working on positioning projects at agency level, the vocabulary we pulled from qualitative sessions consistently outperformed the vocabulary we wrote ourselves. People do not use brand language. They use their own language. Finding it is worth the investment.

Concept development. Early-stage concept testing, before significant production investment has been made, is a legitimate use case. Not to pick a winner, but to understand which elements of each concept are resonating and which are creating confusion or resistance. The goal is to improve the concepts, not to eliminate them.

Understanding decision architecture. How do people actually make decisions in this category? What do they consider first? What do they use as a proxy for quality? What makes them hesitate? These are questions that a focus group can explore in a way that a survey cannot, because the answer requires the participant to think out loud and be probed on their reasoning.

Identifying unmet needs. When participants are given space to talk about their frustrations with a category, they often describe problems that the category has normalised and stopped noticing. That is a useful input to product development and positioning. It is not a brief for a new product. It is a signal worth investigating further.

Audience empathy. This is the least quantifiable benefit and possibly the most important. Sitting behind a two-way mirror and listening to real people talk about their lives, their choices, and their relationship with a category is a different experience from reading a report about them. It creates a kind of understanding that changes how a team thinks about their audience. I have seen brand teams come out of a focus group facility with a fundamentally different relationship to their customer than they had when they walked in. That shift in perspective has commercial value even if it cannot be directly attributed to a specific decision.

The Specific Things Focus Groups Are Not Good At

Equally important is being honest about the limitations.

Predicting market size or purchase intent. Focus groups cannot tell you how many people will buy something. They can tell you what some people think about it. Those are completely different outputs. Using focus group findings to project commercial outcomes is a category error that I have seen cause real damage to launch planning.

Testing price sensitivity. People will not tell you their real price threshold in a group setting. Social desirability, the desire not to appear cheap, pushes responses upward. Price sensitivity research requires different methods, typically conjoint analysis or van Westendorp, neither of which is a focus group.
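
For context on what the alternative looks like: the van Westendorp Price Sensitivity Meter asks each respondent a set of price-threshold questions and reads acceptable price points off the intersections of the resulting cumulative curves. The sketch below is a simplified, hypothetical illustration using only two of the four standard questions, with invented thresholds rather than real survey data.

```python
# Simplified sketch of the van Westendorp intersection logic, using
# only the "too cheap" and "too expensive" thresholds. Data is invented.
def optimal_price_point(too_cheap, too_expensive, prices):
    """Approximate the price where the share calling it too cheap equals
    the share calling it too expensive, scanning a candidate grid."""
    n = len(too_cheap)
    best_price, best_gap = None, float("inf")
    for p in prices:
        share_cheap = sum(t >= p for t in too_cheap) / n    # still "too cheap" at p
        share_exp = sum(t <= p for t in too_expensive) / n  # already "too expensive" at p
        gap = abs(share_cheap - share_exp)
        if gap < best_gap:
            best_gap, best_price = gap, p
    return best_price
```

The full method uses all four questions and individual-level survey responses at scale, which is exactly the point: it is a quantitative instrument, and nothing about a two-hour group discussion substitutes for it.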

Evaluating finished creative. Showing a finished television ad to a focus group and asking whether they like it is an expensive way to generate opinions that will not reliably predict how the ad performs in market. Creative testing at that stage requires quantitative methods with proper benchmarks. Focus groups can be useful for diagnostic work, understanding why something is not landing, but not for go or no-go decisions on finished work.

Representing the market. Eight people in a room are not your market. They are eight people in a room. The findings from a focus group programme are directional, not representative. Treating them as representative is the most common analytical failure in qualitative research.

A Note on the Theatre of Research

There is a version of focus group research that exists primarily to manage internal politics rather than to generate genuine insight. The research is commissioned not because the team has a genuine question, but because a decision needs stakeholder cover. The brief is written around the answer. The screener selects for participants likely to respond positively. The discussion guide steers away from uncomfortable territory. The report is written to confirm rather than challenge.

I have seen this done deliberately and I have seen it done without anyone quite realising it was happening. The result in both cases is the same: money spent, a report produced, a decision made that was always going to be made, and a faint sense among the team that the research was not quite right but nobody can say exactly why.

The antidote is to be honest about why you are running the research before you commission it. If the answer is “to validate a decision already made,” that is a legitimate use of a small amount of budget for a very specific purpose. But it is not market research. It is stakeholder management. Calling it market research corrupts both the process and the output.

The same instinct that drives bad focus group briefs drives a lot of other waste in marketing. Campaigns built to demonstrate activity rather than drive outcomes. Measurement frameworks designed to show green rather than find truth. Briefs written to protect the existing strategy rather than interrogate it. The industry has a broader problem with research theatre, and focus groups are just one of its stages.

If you are thinking about where focus group research fits within a wider programme of market intelligence, the Market Research and Competitive Intelligence hub covers the full range of methods, tools, and approaches that sit alongside qualitative research. It is worth reading before you commission anything, because the choice of method should follow the question, and understanding the full toolkit helps you ask better questions.

Getting the Most Out of the Back Room

If you are attending a focus group as an observer rather than a moderator, the way you engage with the session determines how much value you take from it.

The back room is not a place to check email. It is not a place to brief the moderator on what to ask next based on what the client wants to hear. It is a place to listen, to note the moments that surprise you, and to resist the urge to immediately interpret what you are hearing through the lens of what you already believe.

The most useful practice is to keep a running note of the moments that feel unexpected or uncomfortable. Not the moments that confirm your hypothesis. Those you will remember anyway. The moments that challenge it are the ones you need to write down before the post-session rationalisation begins.

After each session, the observer team should debrief before they see the moderator’s notes. What did each person hear? What surprised them? Where did their interpretations differ? That conversation, held honestly, is often more valuable than the formal debrief with the research team. It surfaces the assumptions that the observers brought into the room and tests them against what actually happened.

Building that kind of analytical discipline into a research programme is harder than it sounds. It requires a team culture that genuinely values being wrong, or at least being surprised. That is not a universal condition. But it is the condition under which focus group research produces its best work.

In the end, generating leads and making decisions from consumer insight comes down to the quality of the questions you ask before any research begins. Unbounce has covered the principle of finding simpler solutions to complex problems, which applies as much to research design as it does to conversion strategy. The simplest, most honest research question usually produces the most useful findings.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between a focus group and a survey?
A focus group is a qualitative method that explores attitudes, reasoning, and language through moderated group discussion. A survey is a quantitative method that measures the prevalence of specific responses across a larger sample. Focus groups tell you what people think and why. Surveys tell you how many people think it. They answer different questions and work best when used together, with qualitative research informing the design of quantitative instruments.
How many participants should be in a focus group?
Most focus groups run with six to ten participants. Fewer than six can make the group dynamic feel stilted and reduce the range of perspectives. More than ten becomes difficult for a moderator to manage effectively, and quieter participants tend to disengage. Mini groups of four to six participants are sometimes used for sensitive topics or specialist audiences where recruitment is difficult, and they tend to produce more depth per participant at the cost of range.
Can focus groups be used to test advertising creative?
Focus groups can be useful for diagnostic creative testing, specifically understanding why an execution is not communicating clearly or what elements are creating unintended reactions. They are not reliable for go or no-go decisions on finished creative, because group dynamics, social desirability, and the artificial viewing context all distort responses. Quantitative creative testing with proper benchmarks is better suited to evaluating finished work. Focus groups are most valuable when used earlier in the creative development process, before significant production investment has been made.
What is the main limitation of focus group research?
The main limitation is that focus groups cannot reliably predict behaviour. What people say in a moderated group setting, in response to concepts or stimuli, is shaped by social desirability, group dynamics, and the artificial context of the session. The gap between stated preference and actual behaviour is well documented and persistent. Focus groups generate hypotheses and surface reasoning. They should not be used to project purchase intent, estimate market size, or make go or no-go decisions without corroborating evidence from behavioural or quantitative sources.
How much does a focus group cost to run?
Costs vary significantly by market, format, and supplier. In-person groups in major markets typically cost more to run than online groups, primarily because of facility hire, travel, and higher participant incentives. A full focus group programme of four to six groups, including recruitment, moderation, analysis, and reporting, commonly runs into five figures. Online groups reduce some costs but require a moderator experienced in managing digital sessions. The most common budget mistake is underspending on moderation and analysis while overspending on facilities, which inverts the priority. The analytical output is where the value is generated.