DIY Market Research That Tells You Something Useful

DIY market research is the practice of gathering competitive, customer, and market intelligence using your own team and tools, without commissioning a formal research agency. Done well, it gives you faster, cheaper, and often more commercially relevant insights than a £30,000 research brief ever would.

The catch is that most teams do it badly. They confuse activity with insight, collect data they never act on, and mistake their own assumptions for validated findings. This article is about doing it properly.

Key Takeaways

  • DIY market research is not a cheaper version of agency research. It is a different discipline with different strengths, and the best marketers treat it that way.
  • The most valuable customer insights often come from sources you already have access to: support tickets, sales call recordings, churn interviews, and search query data.
  • Confirmation bias is the single biggest threat to DIY research. Build in deliberate friction to challenge your working assumptions before you start collecting data.
  • Search behaviour is one of the most honest signals in marketing. What people type into Google at 11pm tells you more about their real problems than any survey response.
  • Research without a decision attached to it is just reading. Every research project should start with the question: what will we do differently depending on what we find?

Early in my career, around 2000, I asked the MD of the agency I was working at for budget to build a new website. The answer was no. So I taught myself to code and built it myself. It took longer, it was imperfect, and it taught me something I have carried ever since: constraints force you to understand the problem more deeply than a budget ever would. DIY market research works the same way. When you cannot outsource the thinking, you get closer to the actual question.

What Makes DIY Market Research Different From Commissioned Research?

Commissioned research agencies bring rigour, sample size, and methodological defensibility. They are genuinely useful for certain decisions: pricing strategy, new market entry, brand tracking at scale. But they are slow, expensive, and the outputs are often written for the person who signed the brief rather than the person who needs to act on it.

DIY research trades some of that rigour for speed, specificity, and commercial proximity. You are closer to the business context. You know which assumptions are actually contested inside the organisation. You can pivot the research question mid-stream when you find something unexpected. And you can turn findings into decisions in days rather than months.

The wider landscape of market research methods and frameworks covers the full range of approaches available to marketing teams, from formal commissioned studies through to the kind of fast, iterative intelligence gathering this article focuses on. Worth reading if you are trying to decide which approach fits your current situation.

The limitation of DIY research is not the tools or the budget. It is the bias. When you are close to a business, you tend to find what you are looking for. That is not a character flaw. It is how human cognition works. The discipline of good DIY research is largely the discipline of building structures that make it harder to fool yourself.

Where Do You Start? Define the Decision First

Most DIY research projects fail before they begin because the team starts with a method rather than a question. They decide they want to run a survey, or do some competitor analysis, before they have defined what decision the research is supposed to inform.

The right starting point is always the decision. Write it down in a single sentence: “We need to decide whether to expand into the SME segment or focus exclusively on enterprise.” Or: “We need to decide whether to reposition our pricing or our messaging.” Everything that follows, the methods you choose, the questions you ask, the data you collect, should be in service of that decision.

If you cannot write the decision down in one sentence, you are not ready to start the research. You are still in the scoping phase, and that is fine. Spend another day on the question before you spend a week on the data.

Once the decision is clear, write down your current working assumptions. What do you believe to be true right now? What would change your mind? These questions are not rhetorical. If you cannot articulate what evidence would change your position, you are not doing research. You are building a case.

The Best DIY Research Sources Most Teams Ignore

Most teams reach for surveys first. Surveys have their place, but they are one of the weaker DIY research tools because they measure what people say rather than what people do, and they are highly sensitive to how questions are framed. Before you build a survey, consider whether you have already answered your question through sources you are not currently using.

Customer support tickets and live chat transcripts. These are a direct window into the language your customers use to describe their problems. Not the language your product team uses, not the language your sales deck uses. The actual words people reach for when something is not working. If you have a few hundred of these, you have the basis for a messaging audit and a product roadmap.
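
To make that concrete, here is a minimal sketch of how such a messaging audit might start: counting the two-word phrases that recur across exported ticket text. The filename and the "description" column are assumptions; substitute whatever your help desk export actually produces.

```python
import csv
import re
from collections import Counter

# Load exported support tickets; "support_tickets.csv" and the
# "description" column are hypothetical -- adjust to your export.
with open("support_tickets.csv", newline="", encoding="utf-8") as f:
    tickets = [row["description"].lower() for row in csv.DictReader(f)]

# Count two-word phrases (bigrams), ignoring very common filler words.
stopwords = {"the", "a", "an", "to", "of", "and", "is", "it", "in", "on", "for", "my", "i"}
phrase_counts = Counter()
for text in tickets:
    words = [w for w in re.findall(r"[a-z']+", text) if w not in stopwords]
    phrase_counts.update(zip(words, words[1:]))

# The most frequent phrases are a first cut at the language customers
# actually use to describe their problems.
for (w1, w2), count in phrase_counts.most_common(20):
    print(f"{w1} {w2}: {count}")
```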

Sales call recordings. If your sales team uses any kind of call recording tool, you have an archive of unfiltered customer conversations. The objections, the hesitations, the competitor mentions, the questions that keep coming up. I have sat in on sales call reviews at agencies and found more useful positioning intelligence in two hours of recordings than in a full brand research report.

Churn interviews. The customers who left are often more honest than the ones who stayed. A 20-minute conversation with someone who cancelled their contract or switched to a competitor will tell you things your NPS score never will. Most teams do not do this because it is uncomfortable. That discomfort is exactly why it is valuable.

Search query data. What people type into search engines is one of the most honest behavioural signals available to marketers. It is unfiltered, it is at scale, and it represents real intent rather than survey responses. Your own site search data, Google Search Console queries, and keyword research tools all give you access to this. I will come back to this in more detail shortly.

Review platforms. G2, Trustpilot, Capterra, Google Reviews. Your competitors’ reviews are publicly available and represent a structured dataset of customer sentiment. Read the three-star reviews especially. The five-star reviews tell you what people want to believe. The one-star reviews are often outliers. The three-star reviews tell you what is genuinely missing.
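
As a rough sketch of how to act on that, the snippet below surfaces the words that are over-represented in three-star reviews relative to five-star ones, working from a hand-collected spreadsheet. The reviews.csv layout with "rating" and "review" columns is an assumption, not a format any of those platforms exports natively.

```python
import csv
from collections import Counter

# "reviews.csv" with "rating" and "review" columns is a hypothetical
# layout for competitor reviews collected by hand or export.
with open("reviews.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

def word_counts(reviews):
    counts = Counter()
    for text in reviews:
        counts.update(w for w in text.lower().split() if len(w) > 4)
    return counts

three_star = word_counts(r["review"] for r in rows if r["rating"].strip() == "3")
five_star = word_counts(r["review"] for r in rows if r["rating"].strip() == "5")

# Words that show up far more often in three-star reviews than in
# five-star ones are a rough pointer to what is genuinely missing.
for word, count in three_star.most_common(50):
    if count >= 3 and count > 2 * five_star.get(word, 0):
        print(f"{word}: {count} (three-star) vs {five_star.get(word, 0)} (five-star)")
```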

How to Use Search Intelligence as a Research Tool

When I was at lastminute.com, we launched a paid search campaign for a music festival and saw six figures of revenue within roughly a day from a relatively simple campaign. What made it work was not the creative or the budget. It was the alignment between what people were already searching for and what we were offering. Search data told us the demand was there before we spent a penny.

That principle applies directly to DIY research. Search behaviour is demand made visible. When someone types a query into Google, they are telling you what they need, in their own words, at the moment they need it. That is extraordinarily valuable intelligence, and most of it is freely accessible.

Google Search Console gives you the actual queries driving traffic to your site, including the ones you are not ranking well for. That gap between what people are searching for and what you are ranking for is a research finding in itself. It tells you where your content or your product positioning has a blind spot.
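
If you want to pull that gap out programmatically rather than eyeballing the interface, the Search Console API exposes the same query data. The sketch below uses the google-api-python-client library with a service account; the site URL, date range, and thresholds are placeholders, and the service account has to be added as a user on the property first.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Service-account JSON and site URL are placeholders.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

# Pull the top queries for a three-month window.
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["query"],
        "rowLimit": 500,
    },
).execute()

# Queries with plenty of impressions but a weak average position are
# the gap between what people search for and what you rank for.
for row in response.get("rows", []):
    if row["impressions"] > 100 and row["position"] > 10:
        print(row["keys"][0], row["impressions"], round(row["position"], 1))
```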

For broader market intelligence, search engine marketing intelligence covers how to extract competitive and demand signals from paid and organic search data. It is a more systematic treatment of the same principle: search behaviour as a proxy for market intent.

Tools like Google Keyword Planner, Semrush, and Ahrefs give you volume and trend data on specific queries. But the more interesting analysis is qualitative. Look at the language patterns. Are people searching for “how to” solutions or “best” comparisons? Are they searching for your category or for the problem your category solves? That distinction tells you something about where they are in their thinking, and where your messaging needs to meet them.
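
To run that qualitative pass over a full query export rather than a sample, a few lines of Python can bucket queries by language pattern. The buckets below are illustrative rather than a standard taxonomy, and queries.txt is assumed to be one query per line.

```python
from collections import Counter

# queries.txt is assumed to be one search query per line, exported
# from Search Console or a keyword tool.
with open("queries.txt", encoding="utf-8") as f:
    queries = [line.strip().lower() for line in f if line.strip()]

# Illustrative buckets: problem-led, comparison-led, category-led.
def classify(query):
    if query.startswith(("how to", "how do", "why")):
        return "problem / how-to"
    if query.startswith("best") or " vs " in query or "alternative" in query:
        return "comparison / evaluation"
    return "category / other"

counts = Counter(classify(q) for q in queries)
for bucket, count in counts.most_common():
    print(f"{bucket}: {count} ({count / len(queries):.0%})")
```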

Running Customer Surveys That Are Worth the Effort

If you have exhausted your existing data sources and still need primary research, surveys can be useful. But most DIY surveys are poorly constructed and produce data that is either obvious or misleading. Here are a few principles that make a real difference.

Keep it short. A survey with more than ten questions will see significant drop-off, and the responses you get from people who complete a 30-question survey are not representative of your broader audience. They represent the subset of your audience with strong enough opinions to persevere. That is a biased sample.

Ask about behaviour, not opinion. “How important is price to your purchasing decision?” is a weak question because people systematically underreport price sensitivity. “When you last chose between two similar products, what tipped the decision?” is a stronger question because it anchors to a specific past behaviour.

Include at least one open text field. The structured questions give you data you can aggregate. The open text field gives you language you can use. Some of the best copy I have seen written for B2B SaaS products came directly from customer survey responses. People describe their own problems better than any copywriter can.

If you are doing this in a B2B context and trying to understand your ideal customer profile, the process of defining and scoring your ICP should run in parallel with your customer research. The two inform each other: your research tells you what your best customers look like, and your ICP definition tells you which research respondents to weight most heavily.

Competitive Research Without a Budget

Competitive intelligence does not require expensive tools or a research agency. It requires systematic attention and a willingness to look at sources most teams treat as background noise.

Start with what is publicly available. Your competitors’ websites, job postings, press releases, LinkedIn activity, and pricing pages tell you a great deal about their strategic priorities. Job postings in particular are underused. If a competitor is hiring aggressively in enterprise sales and building out a professional services team, that tells you something about where they are taking the product. If they are hiring data scientists, they are building something technical. Job postings are a forward-looking signal, not a lagging one.

There is also a category of research that sits outside the traditional primary and secondary distinction. Grey market research covers the kind of intelligence that lives in semi-public spaces: industry forums, niche communities, regulatory filings, and other sources that are technically accessible but rarely systematically mined. For competitive intelligence purposes, this is often where the most interesting signals live.

Beyond the public record, talk to people. Former employees of competitors are often willing to have an informal conversation. Industry analysts, consultants, and journalists who cover your sector have a broader view than you do. These conversations are not a substitute for data, but they provide context that data alone cannot give you.

If you want a structured framework for situating your competitive intelligence within a broader strategic picture, SWOT analysis aligned to business strategy is a useful starting point. The risk with SWOT is that it becomes a box-ticking exercise. The value is in the honest conversations it forces, particularly around weaknesses and threats that the organisation would prefer not to examine.

Qualitative Methods: When to Use Them and How to Run Them

Qualitative research (interviews, focus groups, ethnographic observation) gives you depth that quantitative methods cannot. It tells you why, not just what. For DIY purposes, one-to-one interviews are almost always more useful than focus groups because they reduce social desirability effects and give each participant space to develop their thinking without being anchored by what others say.

That said, there are contexts where group dynamics are themselves the research object. If you are trying to understand how a purchasing committee makes a decision, or how a product is discussed within a team, a structured group conversation can reveal things individual interviews miss. The focus group as a research method has specific strengths and specific failure modes, and understanding both will save you from drawing the wrong conclusions from the format.

For DIY interviews, the most important skill is knowing when to stop talking. The instinct when a participant goes quiet or gives a vague answer is to fill the silence or rephrase the question. Resist it. The pause before someone gives their real answer is often the most valuable moment in the interview. Let it breathe.

Record everything with permission and transcribe it, or use a tool that does both. Pattern recognition across ten interviews is much easier when you are working from text than from memory or shorthand notes. You are looking for the phrases that recur, the hesitations that cluster around specific topics, and the moments where the participant corrects themselves or qualifies a statement. Those are the moments that contain the actual insight.
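
If the transcripts sit as plain text files, a short script can surface which phrases recur and, more usefully, how many separate interviews they appear in. The transcripts/*.txt path and the three-word phrase length are assumptions; it is a crude first pass, not a substitute for reading the transcripts.

```python
import glob
import re
from collections import Counter

# One plain-text transcript per file; "transcripts/*.txt" is assumed.
files = glob.glob("transcripts/*.txt")

phrase_in_interviews = Counter()
for path in files:
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    # Count each three-word phrase once per interview, so the score
    # reflects how many participants used it, not how often one did.
    phrases = set(zip(words, words[1:], words[2:]))
    phrase_in_interviews.update(phrases)

# Phrases that appear in several different interviews are candidates
# for a genuine pattern rather than one person's turn of phrase.
for phrase, n in phrase_in_interviews.most_common(25):
    if n >= 3:
        print(f"{' '.join(phrase)}  ({n} of {len(files)} interviews)")
```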

Understanding Pain Points Without Asking Directly

One of the more reliable findings from years of running research across different sectors is that people are not always able to articulate their pain points directly. They describe symptoms rather than causes. They frame problems in terms of the workarounds they have built rather than the underlying need. And they often understate problems that feel embarrassing or that they have normalised.

This is why indirect research methods often outperform direct questioning. Watching someone try to complete a task tells you more about the friction in your product than asking them to rate their satisfaction. Listening to how they describe the problem to a colleague tells you more about the real pain than asking them to fill in a form.

For marketing services businesses specifically, pain point research for marketing services covers the specific dynamics of researching buyers in a sector where the buyer is also a sophisticated marketer. That adds a layer of complexity: your respondents know what you are doing and may perform accordingly. The methods that work in that context are somewhat different from standard B2B research.

More broadly, the principle is to triangulate. Do not rely on any single research method or data source. If your survey data, your support ticket analysis, and your sales call recordings all point to the same pain point, you have a finding. If only one source points to it, you have a hypothesis.
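
One way to keep that finding-versus-hypothesis distinction honest is to tag each coded pain point with the source it came from and only promote it once it appears in more than one. A minimal sketch, with illustrative data rather than anything real:

```python
from collections import defaultdict

# Illustrative coded observations: (pain point, source) pairs produced
# by whatever tagging you did on each data set.
observations = [
    ("onboarding takes too long", "support tickets"),
    ("onboarding takes too long", "sales calls"),
    ("pricing unclear at renewal", "churn interviews"),
    ("onboarding takes too long", "survey"),
    ("pricing unclear at renewal", "sales calls"),
    ("reporting exports are limited", "support tickets"),
]

sources_by_pain_point = defaultdict(set)
for pain_point, source in observations:
    sources_by_pain_point[pain_point].add(source)

# Two or more independent sources: treat as a finding.
# One source: treat as a hypothesis to test, not a conclusion.
for pain_point, sources in sorted(sources_by_pain_point.items()):
    status = "finding" if len(sources) >= 2 else "hypothesis"
    print(f"{pain_point}: {status} ({', '.join(sorted(sources))})")
```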

Turning Research Into Decisions

I have seen a lot of research projects produce a lot of slides that nobody acts on. The research gets presented, people nod, it gets filed, and the organisation carries on doing what it was already doing. This is not a research problem. It is a process problem.

The way to avoid it is to connect the research output directly to a decision point before you start. When you define the research question, also define the decision it will inform and who is accountable for making that decision. The research findings should land on the desk of the person with the authority and the obligation to act on them, not in a shared folder that everyone has access to and nobody reads.

Present findings as implications, not just data. “Forty percent of respondents said price was their primary concern” is a data point. “Our pricing is creating a barrier at the evaluation stage that our messaging is not currently addressing” is an implication. The second version is actionable. The first version is just a number.

And be honest about confidence levels. DIY research with a sample of 40 customers is not statistically representative of your entire market. That does not make it worthless, but it means you should treat the findings as directional rather than definitive. The appropriate response to directional research is a decision to act and monitor, not a decision to act and assume you are right.
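
To put a rough number on "directional": with 40 respondents and 40 percent of them giving a particular answer, a standard 95 percent confidence interval for that proportion spans roughly 25 to 55 percent. The worked example below uses the normal approximation; the figures are illustrative.

```python
import math

# Worked example: 16 of 40 respondents (40%) gave a particular answer.
n = 40
p = 16 / n

# 95% normal-approximation confidence interval for a proportion.
margin = 1.96 * math.sqrt(p * (1 - p) / n)
low, high = p - margin, p + margin

print(f"Observed: {p:.0%}")
print(f"95% interval: roughly {low:.0%} to {high:.0%}")
# With n = 40 that interval is about 25% to 55% -- wide enough that
# the result is directional, not definitive.
```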

For a broader view of the research methods and frameworks that sit alongside the DIY approaches covered here, the market research hub brings together the full range of intelligence-gathering disciplines, from competitive analysis to customer segmentation to search-based demand research. It is a useful reference point as you decide which methods fit your current questions.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is DIY market research?
DIY market research is the process of gathering customer, competitor, and market intelligence using your own team and tools, without commissioning a formal research agency. It typically draws on sources like customer interviews, survey tools, search data, review platforms, and competitive monitoring. The advantage is speed and commercial proximity. The risk is confirmation bias, which requires deliberate structures to manage.
How do I run a customer survey that produces useful results?
Keep it short, ideally under ten questions. Ask about specific past behaviours rather than general opinions or preferences. Include at least one open text field so respondents can answer in their own language. Avoid leading questions and double-barrelled questions. Most importantly, define what decision the survey is informing before you write a single question. Surveys built around a specific decision produce usable data. Surveys built around general curiosity usually produce noise.
What is the biggest mistake teams make with DIY market research?
Confirmation bias is the most common and the most damaging failure mode. Teams design research to validate what they already believe rather than to genuinely test their assumptions. The structural fix is to write down your working assumptions before you start collecting data, then explicitly ask: what would have to be true for each of these assumptions to be wrong? If you cannot answer that question, your research is not designed to find the truth. It is designed to find agreement.
How can I do competitive research without expensive tools?
Start with what is publicly available: competitor websites, pricing pages, job postings, LinkedIn activity, press releases, and review platforms like G2 or Trustpilot. Job postings are particularly underused as a forward-looking signal. Beyond the public record, talk to people with industry context: analysts, journalists, and professionals who move between companies. Combine these sources systematically rather than relying on any single one. The pattern across multiple sources is more reliable than any individual data point.
When should I use qualitative research methods instead of surveys?
Use qualitative methods when you need to understand why, not just what. Surveys tell you how many people hold a view. Interviews tell you how they arrived at it and what they mean by it. Qualitative research is particularly valuable early in a research process when you are still forming your hypotheses, when you are trying to understand a behaviour that is hard to capture in a structured question, or when your survey data is producing answers that do not quite make sense and you need to understand the reasoning behind them.
