IDI Market Research: What One-on-One Interviews Reveal

IDI market research, short for in-depth interview, is a qualitative research method in which a researcher conducts one-on-one conversations with individual participants to explore attitudes, motivations, and decision-making in depth. Unlike surveys or focus groups, IDIs give you unfiltered access to how a single person actually thinks, without the social dynamics that distort group settings.

The format sounds simple. The insight it produces rarely is.

Key Takeaways

  • IDIs surface the reasoning behind decisions, not just the decisions themselves, which makes them uniquely valuable for strategy work.
  • Focus groups create social pressure that flattens honest responses. IDIs remove that pressure entirely.
  • The quality of an IDI depends almost entirely on the quality of the discussion guide and the interviewer’s ability to probe without leading.
  • IDIs work best early in a research programme, before you have hypotheses to test, or late in one, when you need to explain quantitative findings that don’t make sense.
  • Ten to fifteen well-recruited IDIs will often tell you more than a 500-person survey if the questions you’re answering are about motivation, not frequency.

I’ve commissioned a lot of research over the years. Some of it was genuinely useful. A fair amount of it was expensive confirmation of things we already suspected, dressed up in a PowerPoint with confidence intervals. The research that consistently delivered the most commercial value was almost always qualitative, and more often than not, it came from individual interviews rather than group settings. Not because IDIs are inherently superior to everything else, but because the questions that matter most in marketing tend to be about why people do things, and that’s a question you can only really answer in a one-on-one conversation.

Why the Interview Format Changes Everything

There’s a version of market research that exists to make decisions feel safer rather than to make them better. You’ve seen it. The survey that asks customers to rate their satisfaction on a scale of one to ten. The omnibus study that confirms your target audience uses social media. The focus group where three confident people talk over seven quieter ones, and the moderator writes up the loudest opinions as “key themes.”

IDIs are structurally different because they remove the audience. When someone is sitting across from an interviewer, or on a video call with them, there’s no social performance happening. There’s no peer group to impress, no contrarian to react against, no consensus to drift toward. The participant is just talking, and a skilled interviewer is just listening and probing.

That structural difference produces a different quality of data. You get contradictions. You get people saying one thing and then correcting themselves ten minutes later. You get the hesitation before the honest answer. None of that survives a group setting, and none of it shows up in a survey. If you’re doing serious qualitative work, the comparison between focus groups and other research methods is worth understanding before you choose your approach, because the tradeoffs are real and they affect what you can credibly conclude.

The research discipline around IDIs is well-established. Behavioural analytics tools such as Hotjar’s can tell you what people do on a website. IDIs tell you why they did it, why they almost didn’t, and what they were thinking when they stopped.

What IDIs Are Actually Good For

The method has specific strengths. Using it for the wrong type of question wastes money and produces misleading outputs.

IDIs are well-suited to:

  • Exploring decision-making processes. How did someone actually choose a vendor, a product, a service? What triggered the search? What nearly killed the deal? What did they tell their boss to justify the decision? These are narrative questions. They need a conversation, not a checkbox.
  • Understanding emotional and social context. People rarely make purely rational decisions, but they rarely admit that in group settings either. An IDI creates the conditions for more honest reflection.
  • Generating hypotheses before quantitative research. If you’re about to run a large survey, IDIs can tell you whether you’re asking the right questions in the right language. I’ve seen surveys fail not because the sample was wrong but because the options given to respondents didn’t match how they actually think about the category.
  • Explaining anomalies in quantitative data. When a number doesn’t make sense, an IDI programme can often tell you why. The data says one thing. The interviews explain the human behaviour behind it.
  • Sensitive topics. Anything involving money, health, professional failure, or social status is better explored one-on-one. People will not say certain things in a group. They often will in a private conversation.

IDIs are less suited to measuring frequency, prevalence, or statistical significance. If you need to know what percentage of your market does something, you need quantitative research. If you need to know why they do it, you need IDIs.

The broader market research landscape covers a lot more ground than any single method. If you’re building a research programme from scratch, the Market Research and Competitive Intel hub covers the full range of approaches, from primary qualitative methods through to competitive intelligence and secondary research.

How to Design an IDI Programme That Produces Usable Insight

Most IDI programmes fail at one of three points: recruitment, the discussion guide, or analysis. Getting the method right in the room means nothing if you’ve recruited the wrong participants or you’re asking questions that lead people toward the answers you already expect.

Recruitment. The single biggest determinant of IDI quality is who you talk to. “Customers” is not a recruitment brief. You need to define the specific segment, the specific behaviour, and the specific context you’re researching. If you’re trying to understand why enterprise buyers choose one technology vendor over another, you need to talk to the people who made that decision, not people who were adjacent to it. Screener questions matter. Don’t let a research agency fill quotas with whoever is available on the panel.

Early in my agency career, I made the mistake of commissioning a research project where the recruitment brief was too broad. We got participants who were technically in the right industry but hadn’t actually experienced the decision process we were researching. The interviews were pleasant and completely useless. The insight we needed was sitting with a much smaller group of people who were harder to reach. The lesson was simple: the difficulty of recruiting the right person is usually proportional to how valuable their perspective will be.
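A recruitment screener is, in effect, a hard filter on the panel, and it helps to think of it that way. As a toy sketch of the logic (every criterion below is invented for illustration; a real screener would reflect the specific decision being researched):

```python
def passes_screener(candidate: dict) -> bool:
    """Illustrative screener: admit only people who personally made the
    decision being researched, recently enough to remember it well.
    The criteria are invented examples, not a real recruitment brief."""
    return (
        candidate.get("role_in_decision") == "decision_maker"  # not merely adjacent
        and candidate.get("months_since_decision", 999) <= 18
    )

# A hypothetical panel: only A both made the decision and made it recently.
panel = [
    {"name": "A", "role_in_decision": "decision_maker", "months_since_decision": 6},
    {"name": "B", "role_in_decision": "influencer", "months_since_decision": 3},
    {"name": "C", "role_in_decision": "decision_maker", "months_since_decision": 40},
]
qualified = [c["name"] for c in panel if passes_screener(c)]
print(qualified)  # ['A']
```

The point of the sketch is that screeners should exclude the "technically in the right industry" candidates by default, rather than admit them because the panel quota needs filling.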

The discussion guide. A good discussion guide is not a list of questions. It’s a map of the territory you want to explore, with open-ended prompts that allow the conversation to go where the participant takes it. The worst guides are the ones that read like surveys, with closed questions and predetermined response options. The interviewer’s job is to probe, not to collect answers to fixed questions.

Structure the guide around the participant’s experience, not your product or service. Start with their context, their role, their situation. Move into the decision or experience you’re researching. Probe motivations and emotions. End with implications and future behaviour. The product or brand you’re researching should appear late, not early, so you understand the person before you introduce your agenda.

Analysis. Qualitative analysis is where a lot of IDI programmes fall apart. Thematic analysis done properly is rigorous and time-consuming. What often happens instead is that someone reads through the transcripts, picks out the quotes that match what the client expected, and calls it insight. That’s not analysis. It’s confirmation bias with a research budget attached.

Proper thematic analysis involves coding transcripts systematically, looking for patterns across participants, paying attention to what’s absent as much as what’s present, and being genuinely willing to report findings that contradict the brief. The most valuable IDI finding I ever received told a client that their core proposition was solving a problem their customers didn’t actually have. It was an uncomfortable debrief. It saved them from a significant product investment that would have failed.
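As a toy illustration of what systematic coding looks like once transcripts have been tagged (the participant IDs and theme codes here are invented, not a real code frame), a simple tally across participants makes both the patterns and the absences visible:

```python
from collections import defaultdict

# Hypothetical coded transcripts: each participant's transcript has been
# tagged with theme codes during a close reading. Codes are invented
# for illustration only.
coded_transcripts = {
    "P01": {"price_anxiety", "peer_validation", "switching_cost"},
    "P02": {"price_anxiety", "switching_cost"},
    "P03": {"peer_validation", "internal_politics"},
    "P04": {"price_anxiety", "internal_politics", "switching_cost"},
}

# Tally how many participants raised each theme.
theme_counts = defaultdict(int)
for themes in coded_transcripts.values():
    for theme in themes:
        theme_counts[theme] += 1

# Report each theme's prevalence and, just as importantly, who never raised it.
for theme, count in sorted(theme_counts.items(), key=lambda kv: -kv[1]):
    missing = [p for p, t in coded_transcripts.items() if theme not in t]
    print(f"{theme}: {count}/{len(coded_transcripts)} participants; absent in {missing}")
```

A tally is obviously not the analysis itself; the judgment lives in the coding. But forcing every theme through the same counting discipline is what stops the write-up from quietly becoming a highlight reel of convenient quotes.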

IDIs in B2B Contexts: Where the Method Gets Interesting

In B2B marketing, IDIs do something that almost no other research method can do well: they map the buying group. Enterprise purchase decisions involve multiple stakeholders, and each one has a different frame of reference, different concerns, and different definitions of success. A survey can’t capture that complexity. A focus group can’t put a CFO and a technical lead in the same room and expect honest answers from either of them.

Running IDIs across different roles in the same buying organisation produces a picture of how decisions actually get made, which is almost never the linear process that sales teams assume. Someone is blocking. Someone is championing. Someone is indifferent but has veto power. IDIs surface that dynamic in a way that quantitative research cannot.

This connects directly to how you define and score your ideal customer. If you’re working on ICP scoring for B2B SaaS, IDIs with current customers and lost deals are one of the most reliable inputs you can use. The attributes that show up consistently in interviews with your best customers are the ones worth building your scoring model around, not the firmographic proxies that look clean in a spreadsheet but don’t actually predict fit.

Forrester’s research on channel partner selling makes a related point: channel partners sell to people too, and understanding the human decision-making context matters as much in indirect sales as in direct. IDIs with channel partners and their customers can reveal friction points that never appear in CRM data.

There’s also a pain point dimension to this work that goes beyond the standard buying process. Understanding what keeps your prospects up at night, in their own words, is the foundation of positioning that lands. The work of pain point research for marketing services is one area where IDIs consistently outperform every other research method, because pain is contextual, emotional, and rarely expressed accurately in a survey.

Connecting IDI Findings to Broader Intelligence Work

IDIs don’t exist in isolation. The most commercially useful research programmes combine qualitative and quantitative methods, primary and secondary research, and internal and external data sources. Where IDIs fit in that architecture depends on what questions you’re trying to answer and at what stage of the process.

One area where IDI findings become particularly valuable is when they’re combined with competitive and market intelligence. Understanding how your customers perceive competitor positioning, where they see gaps, and what they wish existed in the market is intelligence that no secondary source will give you. It’s also the kind of intelligence that informs strategy at a level that paid search data or social listening simply can’t reach.

I’ve run programmes where IDI findings directly shaped how we approached search engine marketing intelligence. When you know the specific language your customers use to describe their problems, the search terms they’re likely to use, and the content that would actually be useful to them at each stage of a decision, keyword strategy becomes a much more precise exercise. You’re not guessing at intent. You’re working from documented evidence of how people think.

There’s also a connection to the less formal intelligence that organisations often overlook. Grey market research covers the kind of intelligence that sits outside formal research programmes: industry forums, analyst commentary, informal conversations, and secondary sources that don’t come with confidence intervals but often contain more honest signal than commissioned research. IDI findings, when combined with grey market intelligence, can give you a surprisingly complete picture of a market without a six-figure research budget.

At the strategic level, IDI insight feeds into planning frameworks that go well beyond marketing. When I’ve worked with technology businesses on strategy, the qualitative research layer, including IDIs with customers, prospects, and churned accounts, has consistently produced the most useful inputs for strategy alignment and SWOT analysis work. The difference between a SWOT that drives decisions and one that sits in a drawer is usually the quality of the underlying intelligence. IDIs, done properly, raise the quality of that intelligence significantly.

How Many IDIs Do You Actually Need

This is the question that comes up in almost every research briefing. There’s a concept called theoretical saturation, the point at which additional interviews stop producing new themes. In practice, for a reasonably well-defined research question within a single audience segment, that point tends to arrive somewhere between ten and fifteen interviews.

The number goes up if you’re researching multiple segments, multiple markets, or a topic with significant complexity. It goes down if your research question is narrow and your recruitment is precise. What it should never be is a round number chosen because it fits the budget before the brief has been written. I’ve seen research programmes specify “twenty IDIs” before anyone has defined what they’re trying to learn. That’s not a research design. It’s a line item.

The right number of IDIs is the number required to reach saturation on the questions that matter. That’s a function of the research design, not the budget. If the budget constrains the sample below that threshold, the honest response is to narrow the research question rather than run an underpowered programme and present the findings as if they’re representative.
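The saturation logic above can be sketched mechanically: after each interview, note which themes were new, and stop once several consecutive interviews add nothing. The fieldwork data and the stopping threshold below are illustrative assumptions, not a formal rule:

```python
def interviews_to_saturation(theme_sequences, patience=3):
    """Return the 1-based interview index at which no new theme has appeared
    for `patience` consecutive interviews, or None if never reached."""
    seen = set()
    quiet_streak = 0
    for i, themes in enumerate(theme_sequences, start=1):
        new_themes = set(themes) - seen
        seen |= set(themes)
        quiet_streak = 0 if new_themes else quiet_streak + 1
        if quiet_streak >= patience:
            return i
    return None

# Illustrative fieldwork: themes surfaced per interview (invented codes).
fieldwork = [
    {"trigger_event", "budget_fear"},
    {"budget_fear", "vendor_trust"},
    {"internal_champion"},
    {"budget_fear"},        # nothing new
    {"vendor_trust"},       # nothing new
    {"internal_champion"},  # nothing new -> three quiet interviews, saturation
]
print(interviews_to_saturation(fieldwork))  # 6
```

No real programme runs on a counter like this, but tracking new themes per interview during fieldwork is a practical discipline: it tells you whether to stop early, or whether the brief was broader than the budget allowed.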

Content strategy faces a similar problem. AI has changed content operations significantly, but the underlying question of what your audience actually needs to hear hasn’t changed. IDIs remain one of the most reliable ways to answer it.

The Practical Mechanics of Running IDIs Well

A few operational points that tend to make the difference between IDI programmes that produce actionable insight and ones that produce an expensive report nobody reads.

Record everything. Memory is unreliable, and note-taking during an interview splits attention. With participant consent, record the session. Transcripts can be produced quickly now with AI-assisted tools, and the ability to go back to the original language matters enormously in analysis.

Keep the interviewer separate from the analyst. Ideally, someone other than the interviewer does the primary thematic analysis. The interviewer develops intuitions during the fieldwork that can colour their reading of the transcripts. Fresh eyes on the data catch things that the interviewer has normalised.

Brief the client on what IDIs can and cannot do. The most common source of disappointment in qualitative research is a client who expected statistical generalisability and got rich narrative insight instead. These are different things. Both are valuable. Managing the expectation upfront prevents the debrief conversation where someone asks “but is this statistically significant?” about a twelve-person interview programme.

Translate findings into decisions. The output of an IDI programme should not be a transcript summary. It should be a set of implications: what this means for the product, the positioning, the messaging, the channel strategy. Research that doesn’t connect to a decision is just expensive documentation. The most useful research debriefs I’ve sat in ended with a short list of things we were going to do differently as a result.

LinkedIn data on B2B audience behaviour, like the kind covered in Buffer’s LinkedIn statistics resource, can tell you what content formats resonate and when people are active. IDIs can tell you what those people actually care about when they’re making decisions that matter. The two sources of intelligence are complementary, not competing.

If you want a broader view of how IDIs fit within a full research and competitive intelligence programme, the market research section of The Marketing Juice covers the range of methods and how they connect to commercial strategy.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What does IDI stand for in market research?
IDI stands for in-depth interview. It refers to a one-on-one qualitative research conversation between a trained interviewer and a single participant, designed to explore attitudes, motivations, and decision-making in depth. The format is distinct from focus groups, which involve multiple participants simultaneously, and from surveys, which collect structured responses at scale.
How many in-depth interviews do you need for valid qualitative research?
For a well-defined research question within a single audience segment, ten to fifteen IDIs are typically enough to reach theoretical saturation, the point at which additional interviews stop producing new themes. The number increases if you’re researching multiple segments or a complex topic. The right number is determined by the research design, not the budget. Running fewer interviews than the question requires and presenting the findings as representative is a common and significant research error.
What is the difference between an IDI and a focus group?
The core difference is social context. Focus groups involve multiple participants, which creates group dynamics: social pressure, dominant voices, and the tendency for individuals to moderate their responses based on what others say. IDIs remove that dynamic entirely. Participants in a one-on-one interview are more likely to express contradictions, admit uncertainty, and discuss sensitive topics honestly. IDIs are generally better for exploring individual motivation and decision-making. Focus groups can be more efficient for exploring reactions to concepts or stimuli across a range of perspectives simultaneously.
When should you use IDIs instead of a survey?
Use IDIs when your primary question is about why people do something rather than how many people do it. Surveys measure frequency, prevalence, and statistical patterns. IDIs explore reasoning, motivation, emotional context, and the narrative behind decisions. IDIs are also the right choice when you need to generate hypotheses before designing a survey, when you’re researching a sensitive topic, or when quantitative data has produced findings you need to explain. The two methods are complementary, not interchangeable.
How do you analyse the findings from in-depth interviews?
Rigorous IDI analysis involves systematic thematic coding of transcripts, identifying patterns across participants, and paying attention to what is absent as well as what is present. The process should be conducted by someone who can approach the data without a predetermined conclusion, which is why separating the interviewer from the analyst is good practice. The output should not be a collection of quotes. It should be a structured set of themes, with implications for the decisions the research was commissioned to inform. Analysis that selectively surfaces quotes matching the client’s expectations is not analysis. It is confirmation bias with a methodology attached.
