Media Buying Strategy: Stop Optimising the Wrong Things

A media buying strategy is the structured approach a business uses to decide where, when, and how much to spend on paid advertising to reach the right audience at the right cost. Done well, it connects budget decisions directly to commercial outcomes. Done poorly, it produces a lot of activity that looks like progress but moves nothing.

Most teams have a media plan. Far fewer have a media buying strategy. The difference is not semantic. A plan tells you what you will buy. A strategy tells you why, and what you will do when it stops working.

Key Takeaways

  • Most media buying problems are not channel problems. They are objective-setting problems that surface later in the funnel.
  • Audience definition is the highest-leverage decision in any media plan. Channel selection is downstream of that.
  • Buying efficiency and buying effectiveness are not the same thing. Optimising for CPM or CPC without tying it to business outcomes is a common and expensive mistake.
  • The best media buyers are not the ones who know every platform. They are the ones who know when to stop spending on a channel that is not working.
  • Attribution models shape budget decisions more than most teams realise. Choosing the wrong model does not just misreport performance, it actively misdirects spend.

Why Most Media Buying Strategies Fail Before the First Ad Runs

The failure usually happens in the briefing stage, not the execution stage. A client wants reach. Or awareness. Or “to be where our audience is.” These are not objectives. They are directions without a destination.

I have sat in hundreds of briefings over two decades. The ones that produced strong campaigns almost always started with a commercial question: what does this spend need to do for the business, and how will we know if it worked? The ones that produced expensive noise started with a channel preference or a creative idea that needed a home.

When I was running iProspect, we grew the team from around 20 people to over 100 and moved the agency from loss-making to one of the top-five performance agencies in the UK. A large part of that came from being disciplined about what a brief actually needed to contain before we would recommend a channel mix. That discipline was not popular with every client. Some wanted validation, not strategy. But the ones who engaged with it properly got materially better results.

The commercial question has to come first. Everything else is implementation.

How Audience Definition Changes Everything Downstream

Channel selection is a downstream decision. The upstream decision is who you are trying to reach, what they already believe, and what would need to be true for them to act. If you get that right, the channel becomes more obvious. If you skip it, you end up choosing channels based on familiarity or trend, not fit.

This is where a lot of media buying goes wrong in practice. Teams default to the channels they know or the channels that are currently generating the most industry conversation. Social media ad spend has grown substantially over the past decade, but that growth does not mean social is the right channel for every objective. Early data on social ad spend already showed that investment was running ahead of measured effectiveness for many categories. That tension has not gone away.

Audience definition means more than demographics. It means understanding where someone is in their decision process, what signals they are giving off, and what kind of message would be relevant at that moment. A first-time buyer researching a considered purchase is not in the same place as a lapsed customer who bought eighteen months ago. Serving them the same creative through the same channel is not a media strategy. It is a media habit.

If you are working across paid channels more broadly, the wider paid advertising hub covers how different channel types fit together across the funnel, which is worth reading alongside any channel-specific planning work.

The Difference Between Buying Efficiency and Buying Effectiveness

This distinction matters more than most teams give it credit for. Buying efficiency is about cost: CPM, CPC, CPL. Buying effectiveness is about outcome: did the spend produce something of commercial value?

You can be highly efficient and completely ineffective. A campaign that delivers a very low CPM by targeting a broad, low-intent audience is efficient by the numbers and useless in practice. Conversely, a campaign with a higher CPM that reaches a tightly defined, high-intent audience and drives measurable pipeline is effective even if it looks expensive on a cost-per-impression basis.
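The efficiency/effectiveness gap can be made concrete with a little arithmetic. This is a minimal sketch with entirely hypothetical figures (the spend, impression, and conversion numbers are assumptions, not data from the article): the "cheap" campaign wins on CPM and loses badly on return on ad spend.

```python
# Illustrative only: all figures below are hypothetical.
def campaign_economics(spend, impressions, conversions, revenue_per_conversion):
    """Return efficiency (CPM) and effectiveness (ROAS) for one campaign."""
    cpm = spend / impressions * 1000          # cost per thousand impressions
    revenue = conversions * revenue_per_conversion
    roas = revenue / spend                    # return on ad spend
    return cpm, roas

# Broad, low-intent audience: cheap impressions, almost no conversions.
broad_cpm, broad_roas = campaign_economics(
    spend=10_000, impressions=5_000_000, conversions=40, revenue_per_conversion=50)

# Tight, high-intent audience: expensive impressions, strong conversions.
tight_cpm, tight_roas = campaign_economics(
    spend=10_000, impressions=400_000, conversions=500, revenue_per_conversion=50)

print(f"Broad: CPM £{broad_cpm:.2f}, ROAS {broad_roas:.2f}")  # efficient, ineffective
print(f"Tight: CPM £{tight_cpm:.2f}, ROAS {tight_roas:.2f}")  # "expensive", effective
```

On these assumed numbers, the broad campaign delivers a £2 CPM but returns 20p per pound spent; the tight campaign costs £25 per thousand impressions and returns £2.50 per pound. The dashboard favours the first. The business needs the second.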

I saw this play out at lastminute.com when we ran a paid search campaign for a music festival. The campaign was not complex. The targeting was tight, the intent signals were clear, and the offer was direct. We saw six figures of revenue within roughly a day. The efficiency metrics were not spectacular. The effectiveness was undeniable. That experience shaped how I think about performance benchmarks. The number that matters is the one tied to a business outcome, not the one that makes the dashboard look good.

AI tools are increasingly being used to improve campaign efficiency at scale, and platforms like Moz have explored how AI can support better Google Ads management. The risk is that AI optimisation defaults to efficiency metrics because they are easier to measure. Effectiveness requires human judgement about what the business actually needs.

How Attribution Models Silently Redirect Budget

Attribution is not just a measurement problem. It is a budget allocation problem. The model you choose determines which channels get credit, and credit determines where the next pound of budget goes. If your attribution model is wrong, your budget decisions are wrong, and they compound over time.

Last-click attribution is the most common default and the most misleading for anything with a considered purchase journey. It systematically over-credits the final touchpoint, which is usually a branded search or a retargeting ad, and under-credits the upper-funnel activity that created the demand in the first place. Teams running on last-click tend to cut awareness spend because it does not show up in the attribution report, then wonder why their lower-funnel performance degrades six months later.

First-click has the opposite problem: it over-credits the touchpoint that started the journey and ignores everything that closed it. Linear attribution spreads credit evenly, which is statistically tidy but commercially naive. Time-decay models are more defensible for short purchase cycles. Data-driven attribution, where you have enough volume to make it statistically meaningful, is closer to useful, but it is still a model, not a mirror.

The honest approach is to triangulate. Run multiple attribution views in parallel, look for where they agree and where they diverge, and make budget decisions based on the pattern rather than any single model. It is slower and less satisfying than a clean dashboard, but it is more likely to be right. Analytics tools are a perspective on reality, not reality itself. Treating any attribution model as ground truth is one of the most common and costly mistakes in paid media management.
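Running multiple attribution views in parallel is straightforward to sketch. The following toy implementation applies the four models named above to a single hypothetical conversion path; the channel names, timings, and the seven-day decay half-life are all assumptions for illustration, not a recommendation of any particular parameterisation.

```python
# Hypothetical sketch: four common attribution models applied to one
# conversion path, so their disagreement is visible side by side.
from collections import defaultdict

def attribute(path, model, half_life=7.0):
    """Split one unit of conversion credit across touchpoints.

    path: ordered list of (channel, days_before_conversion) tuples.
    """
    credit = defaultdict(float)
    if model == "last_click":
        credit[path[-1][0]] += 1.0
    elif model == "first_click":
        credit[path[0][0]] += 1.0
    elif model == "linear":
        for channel, _ in path:
            credit[channel] += 1.0 / len(path)
    elif model == "time_decay":
        # More recent touchpoints get exponentially more weight.
        weights = [0.5 ** (days / half_life) for _, days in path]
        total = sum(weights)
        for (channel, _), w in zip(path, weights):
            credit[channel] += w / total
    return dict(credit)

path = [("display", 20), ("social", 10), ("branded_search", 1)]
for model in ("last_click", "first_click", "linear", "time_decay"):
    print(model, attribute(path, model))
```

The same path yields four different budget signals: last-click hands everything to branded search, first-click hands everything to display, and the other two land in between. Where the models agree, you have a pattern worth acting on; where they diverge, you have a question, not an answer.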

It is also worth noting that platform-level attribution has its own incentives. Google has made significant changes to its reporting infrastructure over the years, and each change has shaped how advertisers see their own performance. Platforms are not neutral measurement tools. They are also businesses with revenue targets.

When to Buy Direct and When to Use Programmatic

The programmatic versus direct debate has largely settled in favour of programmatic for scale and efficiency, but that does not mean direct buys are obsolete. The right answer depends on what you are buying and why.

Programmatic is strong when you need scale, audience targeting, and the ability to optimise in near-real-time. It works well for performance campaigns where you are chasing a defined conversion event and have enough volume to generate meaningful signals. The trade-off is brand safety, viewability, and the opacity of where your ads actually appear.

Direct buys make sense when context matters as much as audience. A premium editorial environment, a sponsorship integration, or a category-exclusive placement carries value that programmatic cannot replicate. I have seen clients dismiss direct buys because the CPM looks high relative to programmatic, then wonder why their brand perception metrics are not moving. Sometimes you are not just buying an impression. You are buying the association that comes with where that impression appears.

Influencer-driven paid media is a growing middle ground worth considering here. The lines between organic influencer content and paid media placement have blurred considerably, and platforms like Later have mapped out how influencer marketing and paid media intersect, including how to use creator content as paid amplification. Done with the right audience match and clear commercial intent, it can outperform traditional display at a fraction of the CPM. Done as a vanity exercise, it is expensive noise.

The Role of Testing in a Functioning Media Strategy

Most teams test. Fewer teams test with discipline. There is a difference between running A/B tests and having a testing framework that informs strategic decisions.

A testing framework starts with a hypothesis. Not “let’s see which creative performs better” but “we believe that message A will outperform message B for this audience segment because of X, and here is what we will measure to confirm or refute that.” The hypothesis forces clarity about what you are actually trying to learn, which makes the result useful regardless of which version wins.
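A hypothesis of that shape implies a statistical check before declaring a winner. One common approach is a two-proportion z-test on the conversion rates of the two messages; the sketch below uses hypothetical sample sizes and conversion counts, and the 1.96 cutoff is simply the conventional 95% confidence threshold.

```python
# Hedged sketch: a two-proportion z-test for "message A will outperform
# message B". All counts are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(conv_a=260, n_a=5_000, conv_b=200, n_b=5_000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be noise
```

The point is not the statistics; it is that stating the measurement rule up front makes the result useful either way. A test that can only confirm the preferred answer is not a test.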

When I judged the Effie Awards, I was struck by how many entries described testing as a process of iteration without being clear about what they had actually learned. Iteration is not the same as learning. You can run a hundred tests and learn almost nothing if each one is designed to confirm a preference rather than challenge an assumption.

The other testing discipline that matters is budget allocation. Holding a portion of budget, even a small one, for channel or format experimentation outside the core plan is how you discover what works before your competitors do. Most teams do not do this because short-term performance pressure makes experimental spend feel wasteful. That is a false economy. The channels that are working today were experiments for someone at some point.

The organic and paid relationship is worth testing too. Buffer’s analysis of organic versus paid social is a useful reference point for understanding how the two interact and where a hybrid approach can produce better outcomes than either in isolation.

Budget Allocation Across Channels: The Questions That Actually Matter

Budget allocation is where strategy becomes real. The questions most teams ask are: how much should we spend on each channel, and what is the right split between brand and performance? These are reasonable questions. They are not, however, the most important ones.

The more important questions are: what is the marginal return on the next pound in each channel, and where is spend generating diminishing returns that we have not yet acknowledged? Most media plans are built on historical allocation patterns with incremental adjustments, not on a fresh assessment of where the next unit of spend will do the most work.

Diminishing returns in paid search are real and often invisible. A campaign that performed strongly at a given budget level will not necessarily perform proportionally better at twice the budget. Reach extends into lower-intent queries. CPCs rise as you compete for incremental volume. The efficiency curve flattens and then bends downward. Recognising that inflection point and reallocating budget before it becomes a problem is one of the most valuable things a competent media strategist does.
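The inflection point can be made visible by looking at marginal, rather than average, return. This sketch assumes a saturating square-root response curve purely for illustration; real response curves have to be estimated from spend experiments, and the coefficient here is invented.

```python
# Sketch of spotting diminishing returns. The response curve and budget
# levels are illustrative assumptions, not measured data.
def conversions_at(spend):
    """Hypothetical saturating response curve for a paid search campaign."""
    return 4 * spend ** 0.5   # conversions grow sub-linearly with spend

def marginal_return(spend, step=1_000):
    """Extra conversions bought by the next `step` of budget."""
    return conversions_at(spend + step) - conversions_at(spend)

for budget in (5_000, 10_000, 20_000, 40_000, 80_000):
    extra = marginal_return(budget)
    print(f"£{budget:>6}: next £1k buys {extra:.1f} conversions "
          f"(marginal CPA £{1_000 / extra:.0f})")
```

Average cost per conversion can still look healthy while the marginal CPA has quietly doubled. The reallocation decision belongs at the margin: the question is never "is this channel working?" but "what will the next pound here buy, relative to the next pound elsewhere?"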

The search landscape has also changed structurally over time. Historical decisions around search platform consolidation have shaped the competitive dynamics advertisers operate within today. Understanding that context matters when you are making long-term channel commitments.

For teams managing paid search alongside SEO, the relationship between the two channels has direct implications for budget allocation. The case for integrating SEO and PPC planning is well-made by Moz, and the budget implications of strong organic coverage reducing the need for paid coverage in certain query categories are genuinely significant at scale.

What Good Media Buying Governance Actually Looks Like

Strategy without governance is aspiration. The operational layer matters. Who has authority to reallocate budget mid-flight? What triggers a pause on a channel that is underperforming? How quickly can creative be swapped when a message is not landing? These are not administrative questions. They are competitive advantages.

In agency environments, I have seen campaigns run for weeks past the point where the data clearly indicated they should be paused or restructured, simply because the approval process required to make changes was too slow. By the time the decision was made, the wasted spend was already gone. The governance structure had made the strategy irrelevant.

Good governance means pre-agreed decision rules. If cost-per-acquisition exceeds X for five consecutive days, we pause and review. If a channel delivers less than Y% of projected volume in the first two weeks, we reduce budget and reallocate. These rules feel rigid in advance and valuable in practice. They remove the inertia that kills performance.
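A pre-agreed rule like the first one above is simple enough to encode, which is part of its value: there is nothing to debate mid-flight. The threshold and the daily CPA figures below are hypothetical.

```python
# Minimal sketch of a pre-agreed decision rule: "if CPA exceeds the
# threshold for five consecutive days, pause and review".
def should_pause(daily_cpa, threshold, run_length=5):
    """True once CPA has exceeded `threshold` for `run_length` straight days."""
    streak = 0
    for cpa in daily_cpa:
        streak = streak + 1 if cpa > threshold else 0
        if streak >= run_length:
            return True
    return False

cpa_history = [42, 55, 61, 63, 66, 68, 70]      # last seven days, £ per acquisition
print(should_pause(cpa_history, threshold=60))  # → True: five straight days over £60
```

Whether the rule lives in code, a spreadsheet, or a platform's automated rules matters less than the fact that it was agreed before launch. That is what turns a pause decision from a negotiation into an execution step.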

It also means honest reporting. Not reporting that makes the campaign look better than it is, but reporting that surfaces problems early enough to act on them. I have sat in too many review meetings where the deck was structured to manage the client’s mood rather than inform their decisions. That is a short-term relationship strategy and a long-term trust problem.

If you are building or refining your broader paid advertising approach, the paid advertising section of The Marketing Juice covers channel strategy, measurement, and budget planning in more depth across the full spectrum of paid channels.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a media buying strategy?
A media buying strategy is a structured framework for deciding where, when, and how much to spend on paid advertising. It connects budget decisions to commercial objectives and defines how performance will be measured and acted on. It is distinct from a media plan, which describes what will be bought, rather than why and under what conditions decisions will change.
How do you allocate budget across media channels?
Budget allocation should be based on the marginal return available in each channel relative to the current spend level, not on historical splits or industry benchmarks. Start with the commercial objective, define which channels are most likely to reach the right audience at the right stage of the decision process, and build in regular reallocation triggers based on performance data rather than waiting for end-of-quarter reviews.
What is the difference between programmatic and direct media buying?
Programmatic buying uses automated technology to purchase ad inventory at scale, typically through real-time bidding, with strong audience targeting capabilities and near-real-time optimisation. Direct buying involves negotiating placements directly with publishers, which offers greater control over context, placement, and brand association. Programmatic suits performance campaigns at scale. Direct suits situations where editorial context or category exclusivity carries commercial value.
Why does attribution matter for media buying decisions?
Attribution models determine which channels receive credit for conversions, and credit determines where future budget is directed. A flawed attribution model does not just misreport performance, it actively misdirects spend over time. Last-click attribution, for example, systematically over-rewards lower-funnel touchpoints and starves upper-funnel activity of budget, which can degrade overall performance over a medium-term horizon.
How do you know when a media channel is not working?
A channel is not working when it consistently fails to deliver against the commercial objective it was assigned, after sufficient time and budget to generate meaningful data. The key word is consistently. Short-term fluctuations are normal. Sustained underperformance against pre-agreed benchmarks is a signal to pause, restructure, or reallocate. Pre-defining those benchmarks before the campaign launches is what makes the decision objective rather than political.