Focus Groups: How to Run One That Tells You Something

Leading a focus group well is harder than it looks. The mechanics are simple enough: gather a small group of people, ask them questions, listen to what they say. But the gap between a session that produces genuine insight and one that produces comfortable noise is almost entirely down to how the group is designed, recruited, and facilitated.

Done well, a focus group surfaces the kind of textured, contextual understanding that no survey can replicate. Done badly, it confirms whatever the brief writer already believed and gives the client a reason to move forward with the wrong decision.

Key Takeaways

  • A focus group is only as good as its recruitment. Homogeneous groups produce consensus, not insight.
  • The moderator’s job is to create conditions for honest conversation, not to guide participants toward a predetermined answer.
  • Discussion guides should be structured around topics and tensions, not a list of questions to work through sequentially.
  • What people say and what they do are often different things. Focus groups reveal language, framing, and attitude, not behaviour.
  • The analysis stage is where most focus group value is lost. Themes need to be interrogated, not just reported.

I’ve sat in on more focus groups than I can count, on both sides of the glass. Early in my career I moderated them myself. Later, as an agency CEO, I watched clients use them to validate decisions they’d already made. The difference between those two experiences taught me more about qualitative research than any methodology training did.

What Is a Focus Group Actually For?

Before you plan a single session, you need to be honest about what a focus group can and cannot do. It can give you depth on attitudes, perceptions, language, and emotional response. It can surface tensions and contradictions that quantitative data doesn’t capture. It can help you understand why people behave the way they do, not just that they do.

What it cannot do is tell you what will happen at scale. A group of eight people in a room is not a representative sample. It is a lens, not a measurement. If you go into a focus group expecting statistical validation, you will either be disappointed or, worse, you will misread the output as something more definitive than it is.

The best use of focus groups I’ve seen is in combination with quantitative research, where the qual explains the quant. You know from your data that a segment is converting poorly. The focus group tells you why they’re hesitating. That combination is genuinely powerful. The focus group alone, used to decide whether a campaign idea “works,” is a much riskier proposition.

If you’re building out a broader market research programme, it’s worth reading through the Market Research and Competitive Intelligence hub for context on how qualitative methods fit alongside other tools and approaches.

How Do You Design a Focus Group That Produces Useful Output?

Design starts with the research question, not the logistics. Before you book a venue or write a discussion guide, you need a precise statement of what you’re trying to understand. Not “what do customers think about our brand” but something specific: “Why do lapsed customers not return after their first six months?” or “What language do buyers use when they describe this category problem to colleagues?”

The more specific your research question, the more useful your output. Vague questions produce vague answers, and vague answers produce slide decks full of observations that nobody acts on.

Once you have a clear question, you can make sensible decisions about everything else: who to recruit, how many groups to run, what stimulus material to use, and how to structure the discussion.

Who Should You Recruit, and Why Does It Matter So Much?

Recruitment is where most focus groups fail before they even start. The temptation is to recruit people who are easy to reach, engaged with your brand, or articulate and confident. Those people will give you a smooth, productive session. They will also give you a distorted picture of reality.

The most valuable participants are often the ones who are hardest to recruit: the disengaged, the churned, the people who considered you and chose someone else. These are the people whose perspective will actually challenge your assumptions.

I once worked with a retail client who had run a series of focus groups with their loyalty programme members. The output was broadly positive, which was reported as validation of their brand positioning. When we dug into the data, we found that loyalty members represented a small and shrinking share of total revenue. The people they needed to understand were the casual buyers who purchased twice a year and felt no particular attachment to the brand. Those people had never been in a focus group. The research had been measuring the wrong audience entirely.

Screener questionnaires need to be designed carefully. The goal is to identify people who genuinely fit your target profile, not people who are good at answering screener questions. Professional focus group participants (people who attend sessions regularly for the incentive) are a known problem in qualitative research. A well-designed screener includes behavioural questions that are harder to game than attitudinal ones.

Group composition also matters. Mixing people with very different levels of category knowledge or very different power dynamics in the same group will suppress honest responses. Someone who works in the industry you’re researching will dominate a group of general consumers. Someone who is significantly older or more senior than the rest of the group will shift the conversation toward their frame of reference. Segment your groups deliberately, not by convenience.

How Do You Write a Discussion Guide That Works?

A discussion guide is not a questionnaire. That distinction matters more than most people realise. A questionnaire is a list of questions to be answered in sequence. A discussion guide is a map of territory to explore, with room for the conversation to go where it needs to go.

Structure your guide around topics and tensions rather than questions. For each topic, you might have two or three questions as prompts, but the moderator should feel free to follow a thread if something interesting emerges, come back to the guide later, and skip questions that have already been answered organically.

The opening section of any focus group should be low-stakes and designed to warm participants up. Ask about their general relationship with the category, their habits, their context. This is not wasted time. It establishes the conversational norm that you want honest, specific answers rather than polite generalities, and it gives you baseline information that makes everything that follows more interpretable.

The middle section is where the substantive work happens. This is where you explore attitudes, probe contradictions, and introduce any stimulus material such as concepts, creative executions, or messaging frameworks. Be careful with the order in which you introduce stimulus. Once participants have seen a polished creative execution, it is very hard to get them back to their unprimed attitudes. Show stimulus late, not early.

The closing section should give participants a chance to summarise their perspective in their own words. Questions like “if you were explaining this to a friend” or “what would need to be different for you to feel differently” often produce the most quotable and useful output of the entire session.

One practical note: a good discussion guide for a 90-minute session should have enough material for about 120 minutes. You will not get through all of it, and that is fine. The guide is a safety net, not a script.

What Does Good Moderation Actually Look Like?

Moderation is a skill that takes years to develop properly, and it is consistently undervalued by clients who assume that anyone who is comfortable in front of a group can do it. The mechanics of keeping a conversation going are not the hard part. The hard part is staying genuinely neutral while the conversation is happening.

A moderator who nods encouragingly at certain answers, who follows up on comments that confirm the brief and lets contradictory ones pass, or who phrases probes in ways that signal the desired response, will contaminate the data without anyone in the room noticing. This is not usually deliberate. It is a natural human tendency to seek confirmation, and it is exactly why professional moderators train hard to suppress it.

There is a useful framing I came across years ago in a piece on how great communicators listen before they speak. The principle applies directly to moderation. The moderator’s job in the room is to listen at a level most people never reach in ordinary conversation, not just to the words but to what is being avoided, what is being qualified, and what is generating visible discomfort or energy in the group.

Specific moderation techniques worth knowing:

  • The pause. Silence after an answer is one of the most powerful tools in a moderator’s kit. Most people will fill silence with more information, often more honest information than their initial response.
  • The echo. Repeating the last few words of what someone said as a question (“you said you felt frustrated?”) invites elaboration without leading.
  • The redirect. When one participant is dominating, bringing others in explicitly (“does anyone else have a different experience?”) redistributes the conversation without embarrassing the dominant voice.
  • The probe. “Can you tell me more about that?” is almost always more useful than a follow-up question that interprets what the participant just said.
  • The devil’s advocate. Introducing a contrary position as something “some people say” rather than as your own view allows you to test the robustness of group sentiment without signalling a preferred answer.

One thing I always pushed for when I was commissioning qualitative research was a moderator who had no prior relationship with the client or the brief. The closer a moderator is to the business, the harder it is to stay neutral. This is also why having the brand team in the room during moderation, rather than behind the glass, is generally a bad idea. Participants adjust their answers when they sense they’re being evaluated by someone with a stake in the outcome.

How Do You Handle Group Dynamics That Derail the Session?

Every focus group has at least one difficult dynamic. The most common ones are the dominant participant who talks over everyone else, the compliant group that agrees with whatever the moderator seems to want, and the hostile participant who has decided the session is a waste of their time.

The dominant participant is manageable with deliberate redirection and, if necessary, explicit acknowledgement: “That’s really useful, I want to make sure we hear from everyone before we move on.” Most dominant participants are not trying to be difficult. They are engaged and enthusiastic. Channelling that energy rather than suppressing it is usually the right approach.

The compliant group is more dangerous because it looks like a successful session. If everyone is agreeing, if there is no friction, if the discussion is moving smoothly through the guide, that is often a sign that participants are performing rather than responding honestly. A well-designed projective technique, such as asking participants to describe the brand as a person or to sort cards representing different attitudes, can break the compliance pattern by giving people a less direct way to express genuine views.

The hostile participant is rare but real. The best approach is usually to acknowledge their scepticism directly, without making it a confrontation, and to make clear that honest negative feedback is as valuable as positive feedback. Occasionally, a participant who starts hostile becomes the most useful voice in the room once they believe you actually want their unvarnished view.

How Do You Analyse and Report Focus Group Findings Without Losing the Nuance?

Analysis is where most focus group value is lost. The raw output of a focus group is a transcript, a set of notes, and a moderator’s impressions. Turning that into something useful requires a disciplined approach to thematic analysis, not just a summary of what was said.

Start with the transcript, not your notes. Notes taken in the room are already filtered through your existing assumptions. The transcript is closer to the raw data. Read it without your guide in front of you and mark the passages that surprise you, contradict each other, or use language you hadn’t anticipated. These are usually the most valuable moments.

When you identify themes, be specific about the evidence. “Participants expressed concern about price” is weaker than “five of the eight participants mentioned price unprompted, and three of those connected it specifically to perceived risk rather than affordability.” The specificity matters because it determines how much weight the finding should carry in decision-making.

Be honest about what you didn’t find as well as what you did. If your research question was about barriers to trial and the focus group produced no strong signal on that topic, that is a finding. It might mean the barrier is elsewhere, or it might mean the question needs to be asked differently. Either way, it belongs in the report.

I’ve judged marketing effectiveness work at the Effie Awards, and one of the most consistent weaknesses in submissions is the gap between the insight claimed and the research that supposedly produced it. The insight is often genuinely good. The research described in the submission would not have produced it. That gap matters because it means the insight was probably intuitive rather than evidenced, which means it might be right, but nobody tested whether it was. Focus groups are one of the tools that can close that gap, if the analysis is done honestly.

Reporting should distinguish clearly between what participants said (direct evidence), what the moderator observed (interpretive evidence), and what the analyst concludes (inference). Mixing these three levels of evidence without labelling them is one of the most common ways focus group reports mislead their readers.

The Forrester perspective on change management is relevant here in an indirect way: findings that challenge existing beliefs are only useful if the organisation is capable of acting on them. If you know the client will dismiss anything that contradicts their current strategy, the most important thing you can do in the report is frame the challenging findings in terms of business risk, not just research output.

What Are the Most Common Focus Group Mistakes?

Running too few groups is probably the most common structural mistake. A single group of eight people is not a reliable basis for any conclusion. The standard minimum for a qualitative programme is three to four groups per segment you’re researching, enough to distinguish consistent themes from individual variation. Clients often push back on this because of cost. The honest response is that one group is better than no research, but it should be treated as exploratory rather than conclusive.

Using focus groups to test creative executions in isolation is another common misuse. Creative testing in a focus group setting almost always produces conservative, risk-averse feedback. People are reluctant to say they like something unusual in front of strangers. They default to what seems safe and sensible. The history of advertising is full of work that tested poorly and performed brilliantly, and work that tested brilliantly and performed terribly. Focus groups can tell you how people are framing a creative idea. They cannot reliably predict whether it will work in market.

Confusing articulate participants with representative ones is a subtler problem. People who are comfortable speaking in groups, who have clear opinions and express them well, will naturally dominate focus group output. Their views are not more valid than those of quieter participants. A good moderator actively creates space for less vocal voices, because those voices often represent a larger and less visible segment of the actual audience.

There is also the broader issue of treating focus group output as behavioural prediction. What people say they will do and what they actually do are consistently different things. This is not a flaw in the methodology. It is a feature of human psychology. The value of a focus group is in understanding how people think and feel, not in forecasting what they will do. If you need behavioural prediction, you need a different research method.

For anyone building out a more comprehensive approach to understanding customers and competitors, the full range of tools and frameworks covered in the Market Research and Competitive Intelligence section is worth working through systematically.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How many people should be in a focus group?
Most focus groups work best with six to ten participants. Fewer than six and you risk the conversation being dominated by one or two voices. More than ten and the group becomes difficult to manage, with quieter participants disengaging entirely. Eight is a common target because it gives you enough diversity of perspective while remaining manageable for a single moderator.
How long should a focus group session last?
Between 90 minutes and two hours is the standard range for a consumer focus group. Shorter than 90 minutes and you rarely get past surface-level responses into the more nuanced territory where the useful insights live. Longer than two hours and participant fatigue starts to affect the quality of responses. B2B focus groups with senior participants often run shorter, around 60 to 75 minutes, because of the practical constraints on participants’ time.
Should the client team attend the focus group?
Observing from behind a one-way mirror or via a live video feed is fine and often valuable. Being in the room with participants is generally counterproductive. Participants adjust their responses when they sense someone with a stake in the outcome is present, even if that person says nothing. If the client team does observe, they should be briefed beforehand not to draw conclusions from individual comments and to wait for the full analysis before forming views.
How do you recruit participants for a focus group?
Professional recruitment agencies are the most reliable route for consumer research. They maintain panels of potential participants and can screen against detailed criteria. For B2B research, recruitment is harder and often requires direct outreach through professional networks or client databases. In either case, a well-designed screener questionnaire is essential. It should include behavioural questions that verify participants actually fit your target profile, not just attitudinal questions that anyone can answer correctly.
What is the difference between a focus group and an in-depth interview?
A focus group is a group discussion designed to surface how people think and talk about a topic in a social context. The group dynamic is part of the methodology: participants respond to each other, not just to the moderator. An in-depth interview is a one-to-one conversation that removes social pressure and allows for deeper exploration of individual attitudes and experiences. Focus groups are better for understanding shared norms, language, and group-level sense-making. In-depth interviews are better for sensitive topics, complex decision-making processes, or situations where social desirability effects would distort group responses.