Customer Research Incentives: What Gets People to Talk
Customer research incentives are the rewards or compensation you offer participants in exchange for their time, opinions, and insight. Done well, they increase response rates, improve the quality of data you collect, and ensure you’re hearing from the right people, not just the most available ones. Done poorly, they attract low-quality respondents, introduce bias, and waste the budget you spent on the research itself.
Most marketers treat incentives as an afterthought, a line item tacked onto a research brief rather than a strategic decision. That’s a mistake that quietly undermines the entire exercise.
Key Takeaways
- The incentive you choose signals what kind of respondent you’ll attract. Cash draws volume; relevant rewards draw quality.
- Over-incentivising is as damaging as under-incentivising. When the reward is too large, people participate for the money, not because they have something useful to say.
- B2B research requires a different incentive logic than B2C. Senior decision-makers rarely respond to gift cards. Time respect and professional reciprocity matter more.
- Your incentive structure should match the depth of insight you need. A two-minute survey and a 90-minute interview are not the same ask.
- Respondent quality is a research design problem, not just a recruitment problem. Incentives are one variable among several.
In This Article
- Why the Incentive Decision Matters More Than You Think
- What Types of Incentives Are Available?
- How Do You Set the Right Incentive Level?
- The B2B Incentive Problem Is Different
- Incentives and Research Bias: What to Watch For
- Qualitative vs. Quantitative: Incentive Logic Differs
- Incentives in the Context of Broader Research Strategy
- Practical Principles for Getting Incentives Right
I’ve commissioned and overseen a significant amount of customer research across my career: running agencies, managing client programmes across 30 industries, and trying to understand what was actually driving performance versus what we assumed was driving it. The pattern I kept seeing was that companies invested heavily in research methodology but almost nothing in thinking carefully about who they were attracting to participate in it. The incentive structure was usually whatever felt reasonable at the time.
Why the Incentive Decision Matters More Than You Think
There’s a version of customer research that produces genuinely useful insight and a version that produces the illusion of insight. The difference often comes down to who ended up in the sample.
If your incentive is a £5 Amazon voucher, you will hear disproportionately from people with time on their hands and a low threshold for effort. That’s not always the wrong group, but it’s rarely the full picture. If your incentive is a £200 cash reward for a 20-minute survey, you’ll attract a different problem: professional survey-takers who have learned to give answers that keep them eligible for the next study.
Neither extreme gives you what you came for. The goal is to attract people who have a genuine relationship with your product, service, or category, and who will give you honest, considered responses rather than fast or flattering ones.
Our market research hub covers the full landscape of research methods and tools. This article focuses specifically on the incentive layer, which tends to get far less attention than it deserves.
What Types of Incentives Are Available?
Incentives broadly fall into a few categories, and each has a different effect on who responds and how they respond.
Monetary Incentives
Cash, bank transfers, PayPal, and digital wallets are the most direct form of compensation. They’re flexible, universally valued, and easy to administer. The risk is that they attract participation for the wrong reason, particularly at higher values. When you’re paying £150 for an hour of someone’s time, a meaningful portion of respondents will be there for the £150, not because they have relevant experience to share.
That said, monetary incentives are often appropriate and necessary. For hard-to-reach professional audiences, for lengthy qualitative interviews, and for research that requires genuine effort from participants, fair financial compensation is not just acceptable, it’s respectful.
Gift Cards and Vouchers
Gift cards are the most common incentive in consumer research. They feel slightly less transactional than cash, and they’re easy to distribute digitally. The challenge is relevance. A generic retailer voucher says nothing about your relationship with the respondent. A voucher for a brand or category adjacent to your research can actually reinforce the right respondent profile.
If you’re researching a software product and you offer an Amazon voucher, you’re not filtering for people who care about software. If you offer a credit toward a relevant tool or platform, you’re more likely to attract people who actually use that category.
Charitable Donations
Offering to donate to a charity on behalf of the participant removes the personal financial element entirely. This works well in B2B contexts where accepting personal payments can be complicated by company policy, and in research where you want to signal that the exercise is about genuine improvement rather than data extraction. Response rates are typically lower than with cash equivalents, but the quality of responses is often higher.
Product Credits and Early Access
For existing customers, offering product credits, subscription extensions, or early access to new features can be more compelling than cash. It also self-selects for people who value your product, which is usually exactly who you want to hear from. This approach works particularly well for SaaS businesses and subscription models where the incentive has real perceived value but low marginal cost.
Recognition and Professional Visibility
In B2B research, particularly with senior audiences, the opportunity to be cited as a contributor to published research, to receive a copy of findings before public release, or to be acknowledged in a report can carry genuine appeal. This works because it offers something that cash doesn’t: professional credibility and access to information they’d find useful.
How Do You Set the Right Incentive Level?
There’s no universal formula, but there are principles that hold across most research contexts.
Start with the ask. A 5-minute survey requires a different level of compensation than a 90-minute in-depth interview. A rough benchmark that many research practitioners use is to think about what a fair hourly rate looks like for the audience you’re targeting, and then apply that proportionally. For a general consumer audience, £10 to £15 for 30 minutes of effort is broadly reasonable. For a senior professional audience, the calculus changes significantly.
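To make the proportional logic concrete, here’s a minimal sketch of the hourly-rate benchmark. The rate figures below are illustrative assumptions for a worked example, not recommended amounts:

```python
# Illustrative incentive sizing: scale a fair hourly rate by the length of the ask.
# The hourly rates used here are hypothetical examples, not benchmarks.

def incentive_for(hourly_rate_gbp: float, minutes: int) -> float:
    """Apply a fair hourly rate proportionally to the time requested."""
    return round(hourly_rate_gbp * minutes / 60, 2)

# A general consumer audience at a notional £25/hour implies £12.50
# for a 30-minute survey, consistent with the £10-£15 range above.
consumer = incentive_for(25, 30)

# For a senior professional audience the notional hourly figure rises
# sharply, and cash may still be the wrong signal entirely.
senior = incentive_for(150, 45)
```

The arithmetic is trivial on purpose: the hard part is choosing an honest hourly rate for the audience, not doing the multiplication.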
When I was running agency new business research in the mid-2000s, we tried to get senior marketing directors to complete a 45-minute telephone interview about their agency relationships. The incentive was a £50 voucher. The response rate was embarrassing. Not because the voucher was too small in absolute terms, but because it was the wrong signal entirely. A £50 voucher said “we think your time is worth less than a dinner out.” What actually moved the needle was offering to share a summary of the aggregated findings. That was information they genuinely wanted, and it positioned the research as a professional exchange rather than a transaction.
For B2B research specifically, if you’re trying to understand how companies evaluate and select marketing services, pain point research in that category suggests that professional reciprocity often outperforms financial incentives at the senior level. People will talk to you if they believe you’ll give them something useful in return.
The B2B Incentive Problem Is Different
B2B research has a structural challenge that consumer research doesn’t. The people you most need to hear from, the actual decision-makers, are the hardest to reach and the least likely to respond to standard incentives. A procurement director at a mid-market manufacturer is not going to complete your survey for a £25 voucher. And even if they do, they’re probably not giving you their best thinking.
When you’re trying to build or refine an ICP scoring model for a B2B SaaS business, the research that underpins it needs to come from real decision-makers, not from people who happen to have the right job title on LinkedIn and a habit of completing surveys. The incentive structure is part of how you ensure that.
In B2B contexts, I’ve found three things that actually work. First, a clear and credible value exchange: tell participants what they’ll get from the research, not just what you’re taking. Second, brevity as a form of respect: a tightly scoped 20-minute interview signals that you’ve done your homework and won’t waste their time. Third, the right recruitment channel: cold survey invitations rarely work at the senior level. Warm introductions, industry associations, and existing customer relationships are far more effective, even if they’re slower to build.
Incentives and Research Bias: What to Watch For
The incentive you offer doesn’t just affect who participates. It affects how they participate.
There’s a well-documented tendency for people to give more positive responses when they feel they’re being compensated. The reasoning is partly social: if someone has paid you to participate, there’s a subtle pressure to be helpful and agreeable rather than critical. This is particularly relevant in customer satisfaction research, where the goal is to surface genuine friction and dissatisfaction, not to collect positive feedback that makes everyone feel good about the product.
The way to mitigate this isn’t to remove incentives entirely. It’s to design the research so that critical feedback is explicitly invited and structurally easier to give. Anonymous surveys, third-party research facilitation, and question framing that normalises negative responses all help. Tools like Hotjar’s research resources offer practical guidance on survey design that reduces social desirability bias, which is worth reviewing if you’re running in-product research at scale.
The other bias to watch for is self-selection. When you rely on voluntary participation, you systematically under-represent the people who are least engaged with your product. They’re the ones who won’t bother completing a survey regardless of the incentive. This is a real limitation of incentivised research that’s worth acknowledging rather than pretending away. Complementary methods, including behavioural data, grey market research approaches, and passive observation, can fill in some of what voluntary research misses.
Qualitative vs. Quantitative: Incentive Logic Differs
The incentive approach for a large-scale quantitative survey is not the same as for a small-sample qualitative study, and conflating the two creates problems in both directions.
For quantitative research, you’re optimising for volume and representativeness. The incentive needs to be attractive enough to hit your sample size targets without being so attractive that it distorts the sample. Modest, consistent incentives work well here. The goal is to reduce friction, not to create a financial motivation to participate.
For qualitative research, the calculus is different. You’re asking for significantly more from each participant: time, depth of thinking, willingness to be challenged and probed. The incentive needs to reflect that. More importantly, the recruitment process for qualitative research should be more rigorous, because the quality of insight depends entirely on who’s in the room. If you’re running focus groups, the screening criteria and the incentive work together to determine the quality of the conversation you’ll have.
I’ve sat in on focus groups where the incentive was generous but the screening was poor, and the result was a two-hour session that produced almost nothing useful. The participants were articulate and engaged, but they weren’t the right people. The incentive had done its job of filling seats. The research design hadn’t done its job of filling the right seats.
Incentives in the Context of Broader Research Strategy
Incentives don’t exist in isolation. They’re one component of a research programme that also includes methodology selection, sample design, question design, analysis, and activation. Getting the incentive right while getting the rest wrong still produces bad research.
One thing I’ve observed over the years is that companies often invest in research as a way of validating decisions they’ve already made. The research is designed, consciously or not, to produce a particular answer. The incentive structure plays into this: if you recruit broadly and incentivise heavily, you get a large sample that tells you what you want to hear. If you recruit carefully and incentivise appropriately, you get a smaller sample that tells you what you need to hear.
This connects to a broader point about what research is actually for. The most valuable customer research I’ve been involved in was never the research that confirmed the strategy. It was the research that surfaced something uncomfortable, a product assumption that was wrong, a segment that was being ignored, a competitor move that hadn’t been noticed. That kind of insight only comes when the research design, including the incentive structure, is genuinely oriented toward finding the truth rather than confirming a narrative.
If you’re building out a broader intelligence function, the principle of honest data interpretation that applies to search engine marketing intelligence applies equally here. The goal is accurate signal, not comfortable signal.
It’s also worth noting that research incentive decisions don’t happen in a vacuum. They sit within a broader business strategy context. When I’ve worked with technology and consulting businesses on strategy alignment and SWOT analysis, the customer insight that informs those exercises is only as good as the research that generated it. Weak research produces weak strategy, and the incentive structure is one of the less glamorous but genuinely important variables in that chain.
Practical Principles for Getting Incentives Right
Here are the principles I’d apply to any research incentive decision, drawn from what I’ve seen work and what I’ve seen fail.
Match the incentive to the audience, not to your budget. What motivates a 25-year-old consumer is not what motivates a 50-year-old CFO. Design the incentive from the respondent’s perspective, not from the researcher’s convenience.
Treat the incentive as a signal, not just a payment. What you offer communicates something about how you value the participant’s time and expertise. A poorly calibrated incentive can actively damage the quality of your recruitment before the research even begins.
Test incentive structures before you scale. If you’re running a large programme, pilot the recruitment with two or three different incentive approaches and compare not just response rates but respondent quality. This is rarely done but almost always worth doing.
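One way to structure that pilot comparison is to score each incentive arm on response rate and respondent quality together, rather than on response rate alone. The sketch below uses hypothetical arm names, counts, and quality scores, invented purely for illustration:

```python
# Hypothetical pilot data: for each incentive arm, invitations sent, completes,
# and an average respondent quality score (0-1) coded during screening review.
# All figures are invented for illustration.
pilot_arms = {
    "cash_20":       {"invited": 200, "completed": 58, "avg_quality": 0.35},
    "voucher_15":    {"invited": 200, "completed": 41, "avg_quality": 0.61},
    "report_access": {"invited": 200, "completed": 30, "avg_quality": 0.85},
}

def summarise(arms):
    """Rank arms by quality-adjusted yield, not raw response rate."""
    rows = []
    for name, a in arms.items():
        rate = a["completed"] / a["invited"]
        # Expected number of genuinely useful responses per 100 invitations:
        # response rate and quality score combined into one figure.
        useful_per_100 = round(rate * a["avg_quality"] * 100, 1)
        rows.append((name, round(rate, 2), a["avg_quality"], useful_per_100))
    return sorted(rows, key=lambda r: r[3], reverse=True)

for name, rate, quality, adj in summarise(pilot_arms):
    print(f"{name:14s} response={rate:.0%} quality={quality} useful-per-100={adj}")
```

With these invented numbers the cash arm wins on raw response rate but ranks last on quality-adjusted yield, which is exactly the distinction the pilot is meant to surface.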
Don’t conflate response rate with research quality. A high response rate achieved through generous incentives can mask a fundamentally biased sample. Response rate is a metric of recruitment efficiency, not research validity.
Build in a quality filter at the screening stage. The incentive attracts people to the door. The screening process decides who comes in. These two things need to work together. If your incentive is attracting the wrong profile, no amount of screening will fully compensate for it, but tight screening can significantly improve the quality of what you get from a broadly incentivised sample.
Consider non-financial value for professional audiences. Insight reports, benchmarking data, early access to findings, and professional recognition are often more compelling to senior B2B audiences than cash equivalents. They also signal a more sophisticated research operation, which itself improves participation quality.
There’s a version of this that connects to a broader belief I hold about marketing and research generally. Companies that genuinely want to understand their customers, rather than just validate their assumptions about them, tend to build better products and better marketing. The incentive structure is a small but telling indicator of which camp a business is in. If you’re designing it to fill a sample quota, you’re probably in the wrong camp. If you’re designing it to attract the people whose honest opinion you most need to hear, you’re probably on the right track.
For more on building a research function that produces actionable intelligence rather than comfortable noise, the full market research and competitive intelligence hub covers methodology, tools, and strategic application in depth.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
