Media Market Research: What Most Planners Get Wrong
Media market research is the process of gathering and analysing data about audiences, channels, competitors, and consumption habits to inform where and how a brand should invest its media budget. Done well, it closes the gap between assumption and evidence before money leaves the building.
Most planners treat it as a box-ticking exercise. They pull a few audience reports, reference a syndicated study, and call it strategy. The result is a media plan that looks rigorous on paper but is built on the same assumptions the team walked in with.
That gap between the appearance of research and the substance of it is where most media budgets quietly bleed out.
Key Takeaways
- Media market research is only as useful as the decisions it changes. If your research confirms everything you already planned, it probably wasn’t research.
- Syndicated audience data tells you who could be reached. Behavioural and intent data tells you who is actually in-market. Conflating the two is a common and expensive mistake.
- Competitive media intelligence is not optional. Without it, you are optimising your spend in isolation from the market you are actually competing in.
- The best media research combines quantitative reach and frequency data with qualitative insight into why audiences behave the way they do on each channel.
- Research that is not connected to a clear commercial objective is a cost centre, not an asset. Every research brief should start with the business question it is designed to answer.
In This Article
- Why Most Media Research Fails Before It Starts
- The Difference Between Audience Data and Audience Intelligence
- Competitive Media Intelligence: The Dimension Most Plans Ignore
- Search Intelligence as a Media Research Signal
- Understanding the Informal Information Layer
- Connecting Media Research to ICP and Audience Definition
- Pain Point Research as a Media Planning Input
- Turning Research Into a Media Brief That Actually Guides Decisions
Why Most Media Research Fails Before It Starts
I have sat in a lot of media planning sessions over the years. The pattern is remarkably consistent. Someone presents a slide with audience demographics, another slide with channel reach figures, and a third with a competitor’s estimated spend. Then the team builds a plan that looks almost identical to the one they ran the previous year.
The research did not drive the plan. The plan was already decided, and the research was used to justify it.
This is not a data problem. It is a process problem. When research is commissioned after the strategic direction has been set, it will almost always be used selectively. The team reaches for data that supports the direction already chosen and ignores data that complicates it. That is not cynicism; it is simply how confirmation bias works in practice.
Effective media market research starts with a genuine question, not a foregone conclusion. The question might be: which channels are our highest-value customers actually using when they are in a buying mindset? Or: where is our share of voice weakest relative to competitors? Or: what does our target audience actually think about the category we operate in?
If you cannot articulate the business question before the research begins, you will not know what to do with the findings when they arrive. Our broader market research coverage goes into this in more depth, but it applies with particular force to media planning, where the connection between insight and spend decision is direct and financially significant.
The Difference Between Audience Data and Audience Intelligence
Audience data is easy to come by. Syndicated tools, platform dashboards, and third-party panels will give you demographic breakdowns, estimated reach, and broad interest categories without much effort. Most media planners have access to more of this data than they can meaningfully use.
Audience intelligence is something different. It is the understanding of why a specific audience behaves the way it does on a specific channel, what triggers a response, what creates friction, and what the competitive context looks like from where they are sitting.
Early in my career, I made the mistake of treating reach figures as a proxy for relevance. If a channel could reach a large number of people who matched our target demographic, the logic went, it should be on the plan. What I learned, eventually, is that demographic overlap is a starting point, not a conclusion. A channel can reach the right people at entirely the wrong moment, in entirely the wrong mindset, and deliver almost nothing in return.
The shift from data to intelligence requires layering. You start with reach and frequency data. You add behavioural signals, search intent patterns, and purchase trigger research. Then you add qualitative context, the kind of insight you get from focus groups and qualitative research methods, which tells you what the numbers cannot. Why does this audience scroll past certain formats? What does trust look like in this category? When do they actually make a decision?
That layered picture is what separates a media plan with genuine strategic grounding from one that is just a well-formatted spreadsheet.
Competitive Media Intelligence: The Dimension Most Plans Ignore
One of the more uncomfortable truths about media planning is that your spend does not exist in isolation. You are competing for attention in the same spaces as every other brand targeting a similar audience. The effectiveness of your investment is partly a function of what everyone else is doing around it.
Despite this, competitive media intelligence is often treated as an afterthought. Teams will pull an estimated spend figure from a monitoring tool, note that a competitor is active on certain channels, and move on. That is not competitive intelligence. That is a cursory glance dressed up as analysis.
Proper competitive media research asks harder questions. Where are competitors investing relative to their apparent commercial priorities? Are they increasing spend in channels where you are pulling back? Are there channels where you have structural share of voice advantages that you are not exploiting? Are there gaps in the competitive landscape that represent a genuine opportunity, or are those gaps empty because the channel simply does not work for this category?
This kind of analysis sits at the intersection of media research and broader strategic planning. The SWOT-aligned approach to business strategy applies directly here: understanding your media position in competitive terms is as important as understanding it in audience terms. A channel where you have low share of voice against a well-funded competitor is a different proposition from the same channel where competitors are largely absent.
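Share of voice itself is simple arithmetic: your spend (or impressions) as a fraction of total estimated category activity on a channel. A minimal sketch of the comparison described above, with entirely hypothetical spend figures:

```python
# Illustrative share-of-voice comparison; all spend figures are hypothetical.
def share_of_voice(brand_spend: float, competitor_spends: list[float]) -> float:
    """Brand spend as a fraction of total estimated category spend on a channel."""
    total = brand_spend + sum(competitor_spends)
    return brand_spend / total if total else 0.0

# channel -> (our estimated spend, competitors' estimated spends)
channels = {
    "paid_search": (120_000, [300_000, 180_000]),  # well-funded competitors
    "podcast_ads": (40_000, [10_000]),             # competitors largely absent
}

for channel, (ours, theirs) in channels.items():
    print(f"{channel}: SOV = {share_of_voice(ours, theirs):.0%}")
# paid_search: SOV = 20%
# podcast_ads: SOV = 80%
```

The same 40,000 buys an 80% voice in one channel and a fifth of that in another, which is the "different proposition" point in numeric form.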
When I was running agency teams managing large paid media portfolios, we built competitive monitoring into the planning cycle as a standing agenda item, not an occasional exercise. The teams that did this consistently made better channel allocation decisions than those that relied on periodic snapshots.
Search Intelligence as a Media Research Signal
Paid and organic search data is one of the most underused inputs in media market research. Search behaviour is a direct signal of intent. When someone types a query into a search engine, they are telling you something precise about what they want, when they want it, and how they are framing their need.
For media planners, this matters in several ways. Search volume trends can tell you when category interest peaks and troughs across the year, which should inform how you weight spend across channels seasonally. Query structure can tell you where audiences are in the decision process: research phase, comparison phase, or ready to buy. Competitive search data can reveal which brands are investing heavily in paid search versus relying on organic presence, which tells you something about their confidence in their own brand equity.
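One mechanical way to act on seasonal volume trends is to weight a channel's annual budget in proportion to monthly search volume. A simplified sketch, with invented monthly volumes and budget:

```python
# Weight an annual channel budget by monthly search volume.
# Volumes and budget are hypothetical, for illustration only.
monthly_search_volume = {
    "Jan": 80_000, "Feb": 70_000, "Mar": 90_000, "Apr": 110_000,
    "May": 140_000, "Jun": 180_000, "Jul": 160_000, "Aug": 120_000,
    "Sep": 100_000, "Oct": 90_000, "Nov": 85_000, "Dec": 75_000,
}
annual_budget = 600_000
total_volume = sum(monthly_search_volume.values())

# Each month's share of budget mirrors its share of search demand.
monthly_budget = {
    month: annual_budget * volume / total_volume
    for month, volume in monthly_search_volume.items()
}

# June, the peak demand month, receives the largest allocation.
assert max(monthly_budget, key=monthly_budget.get) == "Jun"
```

In practice you would smooth this against lead times and auction-price seasonality, but a pure volume-proportional split is a sensible baseline to argue from.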
I spent time at lastminute.com in the early days of paid search, when the channel was genuinely new and the mechanics were still being figured out. We ran a campaign for a music festival and saw six figures of revenue come through within roughly 24 hours from a relatively simple setup. What made it work was not sophisticated targeting. It was the fact that search intent data told us exactly who was looking, when they were looking, and what they needed to see to convert. The research was baked into the channel itself.
That lesson has stayed with me. Search engine marketing intelligence is not just a performance tool. It is a research tool that tells you things about audience behaviour that no survey panel can replicate, because it captures real intent in real time rather than self-reported preferences in a controlled setting.
Forrester’s research on technology implementation planning makes a related point about the cost of skipping foundational intelligence work. Poor planning at the research stage creates downstream problems that are far more expensive to fix than the research itself would have cost.
Understanding the Informal Information Layer
Formal research tools (syndicated data, platform analytics, commissioned studies) give you a structured picture of the media landscape. But there is another layer of intelligence that does not appear in any dashboard, and it is often more commercially useful.
This informal layer includes things like: what industry contacts are observing about channel performance that has not yet appeared in published benchmarks; what sales teams are hearing from prospects about where they encounter your brand versus competitors; what customer service teams are picking up about how customers found you and what almost made them choose someone else.
This is adjacent to what we cover in our piece on grey market research, which explores the value of intelligence that sits outside formal research structures. In media planning, this informal intelligence can surface channel opportunities and risks that formal tools consistently miss, particularly in fast-moving or niche categories where syndicated data lags reality by months.
The discipline is in knowing how to weight it. Anecdotal intelligence from three sales calls is not the same as a statistically robust audience study. But dismissing it entirely because it is not quantifiable is its own form of analytical laziness. The skill is triangulation: using informal signals to generate hypotheses, then testing those hypotheses with harder data before making significant budget commitments.
Connecting Media Research to ICP and Audience Definition
One of the most common structural weaknesses in media market research is that it is disconnected from the organisation’s actual customer definition. The media team is working from one audience model, the product team from another, and the sales team from a third. The result is media investment optimised for an audience that does not quite match the customers the business actually wants to acquire.
In B2B contexts, this problem is particularly acute. Programmatic and social targeting tools offer broad demographic and firmographic parameters, but they are a crude approximation of the nuanced customer profile that a well-constructed ICP represents. If you have done the work to build a rigorous ICP scoring framework, that framework should be informing your media targeting criteria directly, not sitting in a separate document that the media team has never read.
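As a toy illustration of what "informing media targeting directly" could look like, here is a hypothetical ICP score built from firmographic fit ratings, reused as the inclusion threshold for media audiences. The weights, attributes, and threshold are all invented for the sketch:

```python
# Hypothetical ICP scoring framework reused as a media targeting filter,
# so media and sales are pursuing the same definition of a good-fit account.
ICP_WEIGHTS = {"industry_fit": 0.4, "company_size_fit": 0.3, "tech_stack_fit": 0.3}

def icp_score(account: dict) -> float:
    """Weighted 0-1 fit score; each attribute value is a 0-1 fit rating."""
    return sum(weight * account.get(attr, 0.0) for attr, weight in ICP_WEIGHTS.items())

def include_in_targeting(account: dict, threshold: float = 0.7) -> bool:
    # Same threshold the sales team uses to qualify accounts.
    return icp_score(account) >= threshold

account = {"industry_fit": 1.0, "company_size_fit": 0.8, "tech_stack_fit": 0.5}
# score = 0.4 + 0.24 + 0.15 = 0.79, above the 0.7 threshold, so included
```

The point is not the specific weights; it is that the scoring logic lives in one place and both teams consume it.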
The alignment question matters for media research in both directions. Your ICP definition should shape which channels you research and how you evaluate them. And your media research findings, particularly around where high-value customers actually spend their time and attention, should feed back into refining your ICP. These are not separate workstreams. They are the same workstream approached from different angles.
BCG’s work on emerging market strategy makes a useful point about the cost of misaligned customer targeting. Getting the customer definition wrong at the strategy stage compounds through every subsequent investment decision. In media, that compounding effect is direct and measurable.
Pain Point Research as a Media Planning Input
Most media research focuses on reach, frequency, and competitive positioning. Fewer teams invest in understanding the specific pain points and decision triggers of their target audience at a channel-specific level. This is a missed opportunity.
The same customer can be in very different emotional and cognitive states depending on which channel they are using and when. Someone encountering your brand in a social feed during passive scrolling is not in the same mental state as someone who has typed a specific search query. The message that works in one context will often fall flat in the other, not because the creative is wrong, but because it is mismatched to the audience’s state of mind at that moment.
Pain point research for marketing services illustrates how this kind of audience psychology work should feed into channel strategy. Understanding not just what your audience wants but when and why they want it, and what friction stands between them and a decision, changes how you allocate spend across the funnel and across channels.
Early in my career, I once built a website from scratch because I could not get budget for an agency to do it. That experience taught me something useful: when you are forced to understand something from the ground up rather than delegating it, you develop a much sharper instinct for what actually matters. Pain point research works the same way. Teams that have genuinely immersed themselves in understanding customer friction make better media decisions than teams that have outsourced that understanding to a brief and a deck.
Turning Research Into a Media Brief That Actually Guides Decisions
The final failure mode in media market research is not in the research itself but in the handoff. Research findings get presented, noted, and then quietly ignored when the plan is being built. The channel mix looks the same as last year. The audience targeting is unchanged. The research has been processed but not integrated.
The discipline that prevents this is translating research findings into specific, falsifiable planning constraints before the media brief is written. Not “our audience is active on social media” but “our highest-value customers are disproportionately active on LinkedIn between 7am and 9am on weekdays, and our current plan does not reflect this.” Not “competitors are investing in video” but “our two main competitors have increased video spend by an estimated 40% in the past six months, and our share of voice in that format is currently negligible.”
Specificity is what connects research to decisions. Vague findings produce vague plans. The more precisely you can translate a research insight into a planning implication, the more likely it is to actually change the brief.
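The idea of a specific, falsifiable constraint can be made mechanical: each research finding becomes a named check that the draft plan either passes or fails before the brief is signed off. A hypothetical sketch, with invented constraint thresholds and budget shares:

```python
# Research findings expressed as pass/fail checks against a draft media plan.
# All findings, thresholds, and budget shares are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlanningConstraint:
    finding: str
    check: Callable[[dict], bool]  # True if the draft plan satisfies the finding

# Draft plan: channel -> share of total budget.
draft_plan = {"linkedin": 0.10, "video": 0.05, "paid_search": 0.55, "display": 0.30}

constraints = [
    PlanningConstraint(
        "High-value customers are disproportionately active on LinkedIn weekday mornings",
        lambda plan: plan.get("linkedin", 0) >= 0.20,
    ),
    PlanningConstraint(
        "Main competitors grew video spend sharply; our video share of voice is negligible",
        lambda plan: plan.get("video", 0) >= 0.15,
    ),
]

for c in constraints:
    status = "PASS" if c.check(draft_plan) else "FAIL"
    print(f"[{status}] {c.finding}")
```

A plan that fails its own research-derived checks, as this one does on both, forces the conversation the handoff usually avoids.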
Copyblogger’s work on content effectiveness makes a related point about the gap between information and action. The most effective practitioners are not the ones with the most data. They are the ones who have developed a disciplined process for turning insight into specific, executable decisions. Media planning is no different.
Semrush’s analysis of how structural decisions affect search performance is another useful parallel. Technical choices that seem minor have compounding effects on outcomes over time. The same logic applies to media: the research inputs that shape your initial channel assumptions compound through every subsequent optimisation decision. Getting the foundation right matters more than most teams appreciate.
If you are building out a more comprehensive research practice around media and competitive intelligence, the full range of tools and approaches is covered across our market research hub, from primary research methods through to competitive monitoring frameworks.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
