Incidence Rate: Why It’s the Number That Kills Survey Budgets
Incidence rate in market research is the percentage of people in a given population who qualify to take your survey. If you’re fielding a study and 1 in 5 respondents meets your criteria, your incidence rate is 20%. That single number determines how long your fieldwork takes, how much it costs, and whether your sample is actually representative of the audience you’re trying to understand.
Most marketers treat it as a procurement detail. It isn’t. It’s a strategic signal that tells you how hard your target audience is to find, and by extension, how well-defined your research brief actually is.
Key Takeaways
- Incidence rate is the percentage of the general population who qualify for your survey. Low IR drives up cost, fieldwork time, and sample bias risk simultaneously.
- A poorly defined screener is the most common cause of artificially low incidence rates. Bad briefs don’t just waste money, they corrupt the data before fieldwork begins.
- Incidence rate below 20% is the point where most panel providers apply cost multipliers. Below 10%, you’re in specialist territory and should be commissioning differently.
- IR is a diagnostic tool, not just a budget line. If your qualifying rate surprises you, your assumptions about the audience were wrong, and that matters beyond the survey.
- Treating incidence rate as a research mechanic rather than a strategic input is how teams spend significant money learning nothing they couldn’t have known in advance.
In This Article
- What Does Incidence Rate Actually Measure?
- How Incidence Rate Affects Cost and Timeline
- Why a Surprising Incidence Rate Is a Research Finding in Itself
- The Screener Is Where Most Studies Go Wrong
- Incidence Rate Benchmarks by Research Type
- How to Estimate Incidence Rate Before You Brief a Study
- When Low Incidence Rate Is the Right Answer
- Incidence Rate and Qualitative Research
- The Strategic Waste Problem Nobody Talks About
If you’re building out a market research capability or trying to get more rigour into how your team commissions studies, the broader market research hub covers the full landscape, from methodology selection to competitive intelligence frameworks.
What Does Incidence Rate Actually Measure?
Incidence rate measures the proportion of a sample that passes your screener and qualifies to complete the survey. If you send invitations to 500 panel members and 80 of them qualify, your incidence rate is 16%.
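The worked example above is a single division. As a minimal sketch (the function name is my own, and the numbers are the ones from the text):

```python
# Incidence rate: the share of invited panel members who pass the screener,
# expressed as a percentage. Figures below are the examples from the text.

def incidence_rate(qualified: int, invited: int) -> float:
    """Return incidence rate as a percentage of invited respondents."""
    if invited <= 0:
        raise ValueError("invited must be positive")
    return 100 * qualified / invited

print(incidence_rate(80, 500))   # 16.0 — the 80-of-500 example
print(incidence_rate(100, 500))  # 20.0 — the 1-in-5 example from the opening
```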
The screener is the series of questions at the start of a survey that filters out people who don’t fit the research criteria. Those criteria might be demographic (age, income, location), behavioural (purchased a product in the last 90 days, uses a specific software category), or attitudinal (considers themselves a primary decision-maker for IT purchases). The tighter the criteria, the lower the incidence rate, and the more expensive and time-consuming the fieldwork becomes.
Where teams go wrong is treating incidence rate as a fixed feature of the audience rather than a function of how they’ve defined the brief. Early in my agency career, I watched a client spend nearly three times their original fieldwork budget because their screener included six qualifying conditions, two of which were redundant and one of which was based on an assumption about their customer that turned out to be false. The incidence rate came in at 8%. The panel provider had flagged it as a risk. The client had overridden the concern because they were confident in their audience definition. They weren’t.
How Incidence Rate Affects Cost and Timeline
Panel providers price fieldwork based on the cost of finding qualified respondents, not the cost of completing interviews. When incidence rate drops, the provider has to contact more people, process more screener completions, and manage more disqualifications before they reach your target sample size. That cost gets passed on.
The industry broadly applies cost multipliers when incidence rate falls below certain thresholds. A study with a 50% incidence rate is relatively straightforward to field. At 20%, you’re starting to pay a premium. Below 10%, most providers will either decline to quote on standard terms or apply a significant uplift that can double or triple the per-complete cost.
Timeline follows the same logic. A study that would normally complete in three days at 40% incidence might take two to three weeks at 12%, assuming the panel is large enough to support it at all. For research tied to a product launch decision or a quarterly planning cycle, that delay can make the data irrelevant before it’s even delivered.
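The mechanics behind that pricing are easy to sketch: to hit a target number of completes, a provider must contact roughly the target divided by the incidence rate. The figures below are illustrative, not real panel pricing:

```python
# Rough fieldwork sizing: contacts needed to reach a target number of
# completes at a given incidence rate. This is a back-of-envelope model;
# real panel economics also include disqualification handling, incentives,
# and quota management, none of which are modelled here.
import math

def contacts_needed(target_completes: int, incidence_rate: float) -> int:
    """Contacts required so that target_completes respondents qualify, in expectation."""
    if not 0 < incidence_rate <= 1:
        raise ValueError("incidence_rate must be between 0 and 1")
    return math.ceil(target_completes / incidence_rate)

target = 400  # completes we want
for ir in (0.50, 0.20, 0.12):
    print(f"IR {ir:.0%}: ~{contacts_needed(target, ir)} contacts")
```

At 50% incidence the provider needs about 800 contacts for 400 completes; at 12% it needs over 3,300. That gap is what the cost multipliers are pricing in.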
There’s also a quality dimension that doesn’t show up in the invoice. When respondents go through a long screener only to be disqualified, they become less engaged with future survey invitations. Panel providers manage this carefully, but in practice, very low incidence studies can attract a disproportionate share of respondents who’ve learned to game screeners. That’s a data integrity problem, not just a cost problem.
Why a Surprising Incidence Rate Is a Research Finding in Itself
This is the part most research briefs miss entirely. When your incidence rate comes back lower than expected, that’s not just an operational inconvenience. It’s telling you something about the market.
If you expected 30% of your target panel to be active users of a specific product category and only 11% qualify, one of three things is true: your screener is too narrow, your assumptions about category penetration were wrong, or both. Either way, you’ve learned something commercially significant before a single survey question has been answered.
I’ve seen this play out in category sizing exercises where a client was convinced they were operating in a large, well-penetrated market. The incidence data from a relatively modest study told a different story. The qualifying rate for their core buyer profile was less than half what the internal team had assumed. That finding reshaped the entire go-to-market model, not because the survey data said so, but because the screener data did. The research hadn’t even started yet.
This connects directly to how teams should think about pain point research. If you can’t find the audience you think you’re solving for, the pain point work that follows is built on a false premise. Incidence rate is the earliest possible checkpoint on that assumption.
The Screener Is Where Most Studies Go Wrong
A screener should do one thing: identify whether a respondent qualifies for the study. It should not try to pre-answer research questions, validate hypotheses, or collect data that belongs in the main survey. Every question you add to a screener that isn’t strictly necessary for qualification is a risk to your incidence rate and a cost to your budget.
The most common screener problems I’ve seen in commissioned research are: over-specification of demographic criteria that don’t actually affect the research question, behavioural thresholds set to match the client’s existing customer base rather than the broader category, and leading questions that allow respondents to self-select based on what they think the study wants rather than their actual behaviour.
That last point matters more than most teams acknowledge. If your screener asks “Do you consider yourself an early adopter of new technology?” and then qualifies people who say yes, you’re not measuring early adoption, you’re measuring self-perception. The incidence rate will look fine. The sample will be wrong.
When I’ve judged research-backed submissions for effectiveness awards, the quality of the screener design is almost always a leading indicator of the quality of the insights that follow. Studies with clean, minimal screeners tend to produce cleaner data. The correlation isn’t coincidental.
For B2B research in particular, screener design is where the ICP definition either holds up or falls apart. If your ideal customer profile is well-constructed, translating it into screener criteria is straightforward. If it’s vague or aspirational, the screener will expose that immediately in the form of an unexpectedly low or unexpectedly high incidence rate.
Incidence Rate Benchmarks by Research Type
There are no universal benchmarks, because incidence rate is entirely dependent on how specific the qualifying criteria are. But there are reasonable reference points for different research contexts.
General consumer studies with broad demographic criteria typically run at 50% or above. Category-level studies targeting people who use a particular product type often land between 25% and 50%, depending on category penetration. Studies targeting specific brand users, recent purchasers, or people with defined behavioural characteristics tend to fall between 10% and 25%. Highly specialised studies, such as those targeting niche professional roles, rare medical conditions, or very specific purchase behaviours, can come in below 10% and sometimes below 5%.
B2B research is consistently harder to field than consumer research. Decision-makers are harder to reach through standard panels, job title screening is unreliable because people describe their roles inconsistently, and the universe of qualifying respondents is smaller to begin with. A B2B study targeting IT directors at companies with over 500 employees in a specific vertical is not a standard panel job. It’s a specialist exercise, and it should be commissioned accordingly.
This is also where grey market research becomes relevant. When standard panels can’t deliver the incidence rate you need at a viable cost, teams sometimes turn to alternative data sources or non-standard methodologies. That approach carries its own risks and requires careful interpretation, but it exists precisely because some audiences are genuinely difficult to reach through conventional means.
How to Estimate Incidence Rate Before You Brief a Study
The best time to think about incidence rate is before you write the brief, not after the panel provider comes back with a cost that blows your budget.
Start with the qualifying criteria and work through them sequentially. For each criterion, ask: what proportion of the general population, or the panel population, would I expect to meet this condition? Then multiply the probabilities. If you expect 40% of panel members to be in your target age group, 30% of those to be category users, and 25% of those to have purchased in the last six months, your rough incidence estimate is 40% × 30% × 25%, which gives you 3%. That’s a specialist study, not a standard one.
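That back-of-envelope multiplication can be sketched in a few lines (the pass rates below mirror the example above):

```python
# Back-of-envelope incidence estimate: multiply the expected pass rate of
# each screener criterion in sequence. Pass rates are the example figures
# from the text, not real category data.
from math import prod

def estimate_incidence(pass_rates: list[float]) -> float:
    """Multiply per-criterion pass rates to get a rough overall incidence rate."""
    return prod(pass_rates)

criteria = [0.40, 0.30, 0.25]  # target age group, category user, purchased in last 6 months
estimate = estimate_incidence(criteria)
print(f"Estimated incidence: {estimate:.0%}")  # prints "Estimated incidence: 3%"
```

One caveat worth keeping in mind: multiplying assumes the criteria are roughly independent. If they overlap heavily, say, most category users in your age group are also recent purchasers, the true incidence will be higher than the estimate, which makes this a usefully conservative starting point.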
This kind of back-of-envelope calculation takes ten minutes and can save weeks of renegotiation. It also forces a useful conversation about which criteria are genuinely necessary for the research question and which are there because someone on the brief assumed them without challenging the assumption.
Secondary data can help calibrate these estimates. Category penetration data, government statistics on demographic distributions, and Forrester’s research on buyer behaviour patterns can all provide reasonable reference points. The goal isn’t precision, it’s a defensible estimate that prevents you from commissioning a study that’s structurally undeliverable at the budget you have.
Search engine marketing intelligence is another underused tool for this kind of pre-research calibration. Search volume data gives you a proxy for category interest and audience size. If the search demand for your category is thin, your incidence rate is likely to be thin too. It’s not a substitute for proper estimation, but it’s a useful early signal.
When Low Incidence Rate Is the Right Answer
Not every low incidence rate is a problem to be solved. Sometimes a low qualifying rate is exactly what you need, because the research question only applies to a small, specific population.
If you’re trying to understand the experience of people who have switched from your product to a competitor in the last 90 days, that’s a small population by definition. A high incidence rate would actually be a warning sign, suggesting your screener isn’t doing its job. The question isn’t whether the incidence rate is low, it’s whether it’s low for the right reasons.
The practical implication is that low incidence studies need to be commissioned differently. They typically require specialist panels, longer fieldwork windows, higher per-complete incentives, and sometimes a qualitative component to compensate for the smaller quantitative sample. Running a low incidence study through a standard panel at a standard price point is how you end up with a sample that looks complete on paper but is compromised in ways that don’t show up until you’re presenting the findings.
I’ve seen this problem arise repeatedly in technology consulting contexts, where research briefs are written by strategy teams who understand the business question but don’t understand the fieldwork implications. The alignment between business strategy and research design is rarely as clean as the brief suggests, and incidence rate is often where the gap becomes visible.
Incidence Rate and Qualitative Research
The concept of incidence rate applies to qualitative research too, though it’s rarely framed that way. When you’re recruiting for focus groups or depth interviews, the proportion of people who qualify for the discussion is functionally equivalent to incidence rate in quantitative fieldwork. The lower the qualifying rate, the harder and more expensive the recruitment.
Qualitative recruitment is often more vulnerable to incidence rate problems than quantitative research, because the sample sizes are small and the criteria tend to be more specific. A focus group of eight people who are all category-switchers with specific attitudinal profiles is a much harder recruitment challenge than it appears on the discussion guide. Recruiters will often pad samples with people who partially meet the criteria rather than admit the recruitment is failing. That’s how you end up with a group that doesn’t reflect the audience you were trying to understand.
If you’re using qualitative methods as part of a broader research programme, it’s worth reading about focus group research methods in detail before you brief the recruitment. The methodology only works if the sample is right, and the sample only works if the qualifying criteria are honest about who you’re actually trying to talk to.
One practical approach that works well in mixed-method programmes is to use a quantitative screener to identify qualified respondents and then recruit the qualitative sample from that pool. You get clean incidence data from the quant phase, and you recruit the qual sample from people who have already been verified as meeting the criteria. It costs more upfront, but it eliminates the recruitment integrity problem entirely.
The Strategic Waste Problem Nobody Talks About
There’s a broader point worth making here about how the research industry thinks about efficiency. A lot of attention gets paid to panel quality, data processing, and reporting formats. Very little gets paid to the upstream decisions that determine whether a study was worth commissioning in the first place.
A study with a 6% incidence rate, a poorly designed screener, and a research question that wasn’t tied to a specific decision is expensive in every sense. It costs money to field. It costs time to analyse. It costs credibility when the findings don’t hold up to scrutiny. And it costs the organisation the opportunity to have done something useful with that budget instead.
I’ve spent enough time running agencies and managing client research budgets to know that the most expensive research is usually the research that answers a question nobody was going to act on. Incidence rate is one of the earliest indicators that a study might be heading in that direction. If you can’t clearly articulate who qualifies and why, and translate that into a screener with a defensible incidence estimate, the research question probably isn’t ready to be fielded yet.
The discipline of estimating incidence rate before briefing a study is, in practice, a discipline of clarifying the research question itself. That’s not a methodological nicety. It’s what separates research that informs decisions from research that fills a slide deck.
For teams building a more systematic approach to market research, the full market research resource library covers methodology selection, competitive intelligence, and how to structure research programmes that connect directly to commercial decisions rather than running parallel to them.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
