Advertising Market Research: What You’re Measuring vs. What You Think You’re Measuring

Advertising market research is the process of gathering and analysing information about audiences, competitors, and market conditions to inform how, where, and to whom you advertise. Done well, it closes the gap between what you assume about your customers and what is actually true. Done poorly, it gives you the confidence to be wrong at scale.

Most teams treat research as a box to tick before a campaign goes live. The briefs that land on creative desks are often built on demographic assumptions, last year’s data, and a few anecdotes from the sales team. The research that would actually change decisions tends to get skipped because it feels slow, expensive, or uncomfortable when it contradicts the plan already in motion.

Key Takeaways

  • Advertising research that doesn’t connect to a specific campaign decision is largely a waste of time and budget.
  • Audience assumptions built on internal data alone are almost always incomplete, because your CRM only shows you the customers you already have.
  • The most actionable research often comes from sources teams overlook: search behaviour, competitor creative, and unmoderated customer language.
  • Qualitative and quantitative methods answer different questions. Mixing them up produces neither insight nor confidence.
  • Research findings that challenge the existing plan are the most valuable ones. If your research only confirms what you already believed, you probably designed it that way.

I’ve spent over 20 years in marketing and agency leadership, managing ad spend across more than 30 industries. I’ve watched brands commission six-figure research projects and then run campaigns that ignored every finding. I’ve also seen a single afternoon of search data analysis change a media strategy more decisively than a month of focus groups. The quality of the research matters far less than whether it’s connected to a real decision.

What Is Advertising Market Research Actually For?

There’s a version of advertising research that exists to make plans feel more legitimate. Teams commission surveys, compile competitor audits, and produce thick slide decks that get presented once and then filed. That kind of research is about confidence, not insight. It’s designed to reduce internal anxiety about a decision that was probably already made.

Useful advertising research does something different. It answers a specific question that, if answered differently, would change what you do. That question might be: which audience segment has the highest untapped demand? Which message is most likely to move someone from consideration to purchase? Which channels are our competitors underusing? What does our target customer actually say when they describe their problem?

If you can’t name the decision your research will inform, you’re not doing research. You’re doing documentation. Our broader market research hub covers the full range of methods and frameworks, but this article focuses specifically on the research that shapes advertising strategy, creative, and targeting.

The Audience Problem Most Advertisers Don’t See

I’ve reviewed a lot of audience briefs over the years. The majority describe the customers a brand already has, not the customers it wants to reach. There’s a structural reason for this: most audience data comes from CRM systems, analytics platforms, and past campaign performance. All of those sources are biased toward people who already converted.

That’s a meaningful limitation. If you’re trying to grow, you need to understand the people who didn’t buy, not just the ones who did. Why did they consider your category and choose someone else? What language do they use to describe their problem? What objections did your advertising fail to overcome?

This is where pain point research becomes genuinely useful in an advertising context. When you understand the specific friction points your audience experiences before they buy, you can build campaigns around those moments rather than around the product features your team is most proud of. Those two things are rarely the same.

For B2B advertisers in particular, the audience definition problem runs even deeper. Buying committees are multi-stakeholder, and the person who clicks your ad is often not the person who signs the contract. If you’re building a B2B advertising strategy, understanding your ideal customer profile with precision matters more than most teams realise. A structured ICP scoring approach can bring real clarity to who you’re actually trying to reach before you spend a pound on media.

Where Search Behaviour Tells You What Surveys Won’t

Early in my career, I was working on paid search campaigns at lastminute.com. We launched a campaign for a music festival with what was, by today’s standards, a fairly simple setup. What struck me wasn’t the result, though watching six figures of revenue come in within a day from a modest campaign was memorable. It was how clearly the search data showed us what people actually wanted. Not what we thought they wanted, not what the brief said they wanted. What they typed when they thought nobody was watching.

Search data remains one of the most underused research tools in advertising. It shows you demand that already exists, in the language customers use naturally, at the moment they’re actively looking. No survey can replicate that. No focus group gets close.

For advertising research specifically, search intelligence tells you which problems people are actively trying to solve, which competitors they’re evaluating alongside you, and which terms are generating commercial intent versus casual curiosity. Search engine marketing intelligence built properly gives you a real-time picture of category demand that most audience research methods take months to approximate.

Tools like SEMrush have started incorporating AI-driven forecasting into their keyword and traffic data, which adds another layer of forward-looking insight to what was already a strong research source. Predictive traffic modelling is still imperfect, but it’s a useful complement to historical search volume when you’re planning advertising campaigns six to twelve months out.
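The intent-sorting step described above can be done with nothing more than an exported keyword list. As a minimal sketch, the following separates commercial-intent queries from casual, informational ones using modifier words — note that the modifier lists and the sample queries are illustrative assumptions, not a standard taxonomy or real export data:

```python
# Minimal sketch: bucketing exported search queries by likely intent.
# The modifier lists below are illustrative assumptions, not a standard taxonomy.

COMMERCIAL_MODIFIERS = {"buy", "price", "pricing", "cost", "deal", "best", "vs", "review"}
INFORMATIONAL_MODIFIERS = {"what", "how", "why", "guide", "ideas", "examples"}

def classify_query(query: str) -> str:
    """Label a search query as commercial, informational, or unclassified."""
    tokens = set(query.lower().split())
    if tokens & COMMERCIAL_MODIFIERS:
        return "commercial"
    if tokens & INFORMATIONAL_MODIFIERS:
        return "informational"
    return "unclassified"

def intent_breakdown(queries_with_volume):
    """Aggregate monthly search volume into intent buckets."""
    totals = {"commercial": 0, "informational": 0, "unclassified": 0}
    for query, volume in queries_with_volume:
        totals[classify_query(query)] += volume
    return totals

if __name__ == "__main__":
    # Hypothetical sample export: (query, monthly search volume)
    sample = [
        ("festival tickets price", 4400),
        ("best summer festivals uk", 2900),
        ("what to pack for a festival", 1900),
        ("glastonbury vs reading", 880),
    ]
    print(intent_breakdown(sample))
```

Even a crude split like this tells you where the commercially meaningful demand sits before you commit media budget; refining the modifier lists against your own category's language is the part that takes actual research.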

Competitive Research as an Advertising Input

Most competitive research in advertising stops at “what are they spending and where.” That’s useful context, but it’s a surface reading. The more interesting question is what they’re saying and whether it’s working.

Competitor creative analysis tells you which messages are being tested, which audiences are being targeted, and where gaps exist in category positioning. If every competitor in your space is advertising on the same rational benefits, that’s both a risk and an opportunity. It’s a risk because you’ll disappear into the noise. It’s an opportunity because emotional or values-based differentiation becomes available to whoever takes it first.

There’s also a less obvious competitive research layer worth exploring: the grey areas of market intelligence. Grey market research covers the informal, semi-public signals that don’t show up in standard competitive audits but often reveal more about where a competitor is heading than their published materials do. Job postings, patent filings, agency roster changes, and founder interviews are all legitimate research inputs that most advertising teams ignore entirely.

BCG’s work on advanced analytics in operational decision-making makes a point that translates well to advertising research: the teams that win aren’t necessarily the ones with the most data, they’re the ones who build systems to act on it faster than their competitors. Analytical advantage in competitive markets comes from speed of insight, not volume of data.

Qualitative Research and What It’s Actually Good For

I’ve sat through focus groups that produced genuinely useful creative direction. I’ve also sat through focus groups that produced nothing except a two-hour transcript of people being polite to each other and a bill for several thousand pounds. The difference is almost always in how the research was designed, not who was in the room.

Qualitative methods are the right tool for understanding language, emotion, and context. They’re the wrong tool for measuring anything. If you’re trying to find out how many people prefer version A over version B, a focus group will mislead you. If you’re trying to understand why someone hesitates before buying a product in your category, a well-run qualitative session can give you something no survey ever will.

The mechanics of running qualitative research well are worth taking seriously. Focus group methodology has been refined considerably over the decades, and the move toward online and asynchronous formats has made it more accessible without necessarily making it better. The quality of the moderator and the quality of the discussion guide matter more than the platform or the sample size.

For advertising specifically, I’ve found the most useful qualitative work happens before a campaign brief is written, not after creative is already in development. By the time you’re testing finished ads in a focus group, you’re often just rationalising work that’s already been approved. The genuinely useful window is earlier: understanding how your audience frames the problem your product solves, in their own language, before a single line of copy has been written.

Connecting Research to Media and Channel Decisions

One of the more persistent failures I’ve seen in advertising planning is treating audience research and media planning as separate workstreams. Research tells you who you’re talking to and what matters to them. Media planning tells you where to reach them. Those two things should be in constant conversation, and they rarely are.

Channel selection without research backing is essentially intuition dressed up as strategy. And intuition, in media planning, tends to skew toward whatever the team is most comfortable with or whatever the previous campaign used. That’s how brands end up spending disproportionately on channels their audience has largely moved away from, or missing channels where their competitors haven’t yet established a presence.

Platform behaviour data is genuinely useful here, provided you read it critically. Aggregate statistics about where audiences spend time are a starting point, not a conclusion. The fact that a platform has a large user base doesn’t mean your specific audience is reachable there at a cost that makes commercial sense. Platform usage data tells you about reach potential. It doesn’t tell you about audience quality, ad environment, or competitive density in your category.

The same critical lens applies to emerging formats. Short-form video has captured a huge share of attention and creative investment. Whether it’s the right channel for a specific advertiser depends on the audience, the message, and the commercial objective, not on whether it’s growing. Short-form content strategy has real merit in the right context. The mistake is adopting it because it’s current rather than because your research says your audience is there and receptive.

The Measurement Problem in Advertising Research

There’s a version of advertising research that happens after campaigns run, and it’s arguably where the most useful learning occurs. Attribution modelling, brand lift studies, and post-campaign analysis can tell you things about what actually worked that no amount of pre-campaign research can predict.

The problem is that most teams treat post-campaign analysis as a reporting exercise rather than a research exercise. The question becomes “did we hit our targets?” rather than “what did we learn about our audience and our messaging that should change what we do next?” Those are very different questions, and only one of them improves the next campaign.

I’ve judged the Effie Awards, which are specifically about advertising effectiveness rather than creative execution. What separates the entries that win from the ones that don’t is almost never the quality of the creative. It’s the quality of the thinking that preceded it: the clarity of the audience insight, the sharpness of the strategic problem being solved, and the discipline of connecting campaign activity to business outcomes. Research is the foundation of all of that.

For technology-led businesses in particular, connecting advertising research to broader strategic planning is worth formalising. Strategy alignment frameworks that incorporate market research as a structured input, rather than an ad hoc exercise, tend to produce more consistent advertising performance because the research is connected to decisions that actually get made.

Building a Research Process That Actually Gets Used

When I was growing an agency from around 20 people to over 100, one of the structural challenges was making sure that research insights actually reached the people making creative and media decisions. The researchers were doing good work. The planners were making good plans. But the two groups weren’t talking to each other in a way that changed outputs.

The fix wasn’t a new process or a new tool. It was a simple rule: no campaign brief gets signed off without a one-page research summary that answers three questions. Who are we talking to and what do they actually care about? What do we know about the competitive landscape in this category? What is the one audience insight this campaign is built on? If you can’t answer those three questions with evidence, you don’t have a brief. You have an assumption document.
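A rule like this is easy to state and easy to erode, which is why some teams formalise it as a literal gate in their briefing workflow. The sketch below shows one way that could look — the field names, the brief structure, and the evidence requirement are all assumptions for illustration, not a standard briefing format:

```python
# Minimal sketch of a "three questions or no sign-off" brief gate.
# Field names and the evidence requirement are illustrative assumptions.

REQUIRED_ANSWERS = {
    "audience": "Who are we talking to and what do they actually care about?",
    "competitive_landscape": "What do we know about the competitive landscape in this category?",
    "core_insight": "What is the one audience insight this campaign is built on?",
}

def missing_answers(brief: dict) -> list:
    """Return the questions a brief fails to answer with evidence.

    An answer only counts if it has a non-empty summary and cites at
    least one evidence source (research, data, customer language).
    """
    missing = []
    for field, question in REQUIRED_ANSWERS.items():
        answer = brief.get(field, {})
        if not answer.get("summary") or not answer.get("evidence_sources"):
            missing.append(question)
    return missing

def sign_off(brief: dict) -> bool:
    """A brief passes only when all three questions are evidenced."""
    return not missing_answers(brief)
```

The point of the evidence-source check is the distinction drawn above: a summary without a cited source is an assumption document, and the gate treats it as one.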

That kind of structural discipline is harder to maintain than it sounds. There’s always a deadline, always a client who wants to move faster, always a creative team that’s ready to go. Research feels like friction. But the campaigns that consistently perform well are almost always the ones where the audience insight is specific and defensible, not generic and assumed.

There’s a broader point here about how marketing teams think about research investment. The question isn’t whether you can afford to do research before a campaign. It’s whether you can afford to run campaigns without it. For the budgets most serious advertisers work with, the cost of a well-designed research process is negligible compared to the cost of a campaign built on the wrong insight.

If you’re building out your team’s research capability more broadly, the full range of methods, tools, and frameworks is covered in our market research resource hub, which goes deeper on both qualitative and quantitative approaches across different business contexts.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between advertising market research and general market research?
Advertising market research focuses specifically on the inputs needed to plan, create, and evaluate advertising campaigns. This includes audience insight, message testing, competitive creative analysis, and channel behaviour data. General market research is broader and may cover pricing, product development, distribution, or brand perception. The distinction matters because advertising research should be tied to specific campaign decisions, not general business intelligence.
How much should a business spend on advertising research?
There’s no universal figure, but a reasonable starting point is treating research as a percentage of total campaign budget rather than a fixed cost. For larger campaigns, even a small allocation to pre-campaign audience research and post-campaign analysis tends to improve performance enough to justify the spend. Many teams find that free or low-cost sources, including search data, social listening, and competitor creative analysis, deliver strong insight without significant budget.
Can small businesses do meaningful advertising research without a dedicated research team?
Yes. The most valuable advertising research is often the simplest: talking to customers directly, analysing search queries in your category, reviewing competitor advertising, and reading customer reviews in your market. None of those require specialist skills or significant budget. The discipline required is in connecting what you find to a specific decision, rather than treating research as background reading.
What is the biggest mistake brands make in advertising research?
Designing research to confirm a decision that has already been made. This happens more often than most teams would admit. When the creative brief is already written or the media plan is already approved, research tends to be used selectively to support the existing direction rather than to genuinely test it. The most useful research is done before the plan is fixed, when findings can still change what you do.
How do you measure whether advertising research actually improved campaign performance?
The cleanest way is to compare campaigns built on documented audience insight against campaigns built on assumptions, across similar budgets and time periods. In practice, most teams don’t track this systematically. A simpler proxy is to ask, after a campaign, whether the audience behaved as the research predicted. If there’s a consistent gap between what research suggested and what campaigns delivered, that’s a signal the research methodology needs revisiting, not that research itself is unreliable.