Advertising Works. You Just Measured the Wrong Thing

Advertising works. The question is whether the advertising you’re running is doing the working, or whether the sale was already going to happen anyway. That distinction matters more than most marketing teams want to admit, and it’s the one that tends to get buried under dashboards and attribution reports that are designed to reassure rather than inform.

Most performance marketing doesn’t create demand. It captures it. The person who clicked your retargeting ad at 11pm on a Tuesday had probably already decided to buy. You paid to be in the room when they raised their hand. That’s not nothing, but it’s also not growth.

Key Takeaways

  • Most performance marketing captures existing demand rather than creating new demand, which means it rewards brands that are already winning.
  • Attribution models are designed to allocate credit, not to explain causality. Treating them as proof that advertising worked is a category error.
  • Reaching new audiences who have never considered your brand is where advertising creates genuine incremental growth.
  • The brands that pull back on brand advertising during downturns are often the same ones who struggle to explain why performance costs rise the following year.
  • Honest measurement means distinguishing between sales you caused and sales you witnessed. Most reporting conflates the two.

The Attribution Trap Nobody Talks About

I’ve spent a significant part of my career looking at attribution reports. Hundreds of them, across dozens of clients, spanning industries from financial services to fast-moving consumer goods. And the thing that struck me early on, and never stopped striking me, is how good these reports are at distributing credit and how poor they are at explaining what actually caused a sale.

Attribution is a model. It’s a set of rules that decides which touchpoints get the credit when a conversion happens. Last-click, first-click, linear, data-driven, whatever the flavour, they all share the same fundamental limitation: they measure correlation across an experience that already ended in a purchase. They cannot tell you what would have happened without that touchpoint. That’s not a technical limitation waiting to be solved. It’s a logical one.
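To make the “attribution is a set of rules” point concrete, here is a minimal sketch using hypothetical touchpoint data. Three common rules split credit for the same single conversion across the same recorded path. Notice what all three have in common: they only redistribute a conversion that already happened. None of them can say whether the sale would have occurred with a touchpoint removed.

```python
# One recorded path that ended in a purchase (hypothetical data).
path = ["display", "social_video", "paid_search", "retargeting"]

def last_click(path):
    # All credit to the final touchpoint before conversion.
    return {t: (1.0 if i == len(path) - 1 else 0.0) for i, t in enumerate(path)}

def first_click(path):
    # All credit to the first recorded touchpoint.
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(path)}

def linear(path):
    # Equal credit to every touchpoint.
    return {t: 1.0 / len(path) for t in path}

for rule in (last_click, first_click, linear):
    print(rule.__name__, rule(path))
```

Every rule allocates exactly 1.0 conversion across the path; the only thing that changes is who gets thanked. Nothing in any of these functions observes the people who never entered the path at all.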

When I was running iProspect, we grew the agency from around 20 people to over 100 and moved it from loss-making to one of the top five performance agencies in the market. A lot of that growth came from helping clients spend performance budgets more intelligently. But the honest version of that story includes the clients who came to us with attribution reports showing every channel was working beautifully, and a business that wasn’t growing. The numbers looked fine. The business wasn’t fine. Those two things should not be able to coexist, and yet they did, repeatedly.

The reason they coexist is that attribution reports measure activity within a closed system. They account for the people who were already in the funnel. They don’t account for the people who never entered it.

What Performance Marketing Is Actually Good At

This is not an argument against performance marketing. It’s an argument for being precise about what it does.

Performance marketing is very good at being present when someone is ready to buy. Paid search captures people who are already looking. Retargeting recaptures people who already showed interest. Shopping ads intercept people who are already comparing. These are all genuinely useful things to do. If you’re not doing them, you’re leaving conversions on the table.

But here’s the thing that took me longer than it should have to fully internalise: the size of that pool of ready-to-buy people is not determined by your performance marketing. It’s determined by everything else. Your brand, your reputation, your product, your category presence, your word of mouth, and yes, your brand advertising. Performance marketing harvests a crop it didn’t plant.

Think about a clothes shop. Someone who picks up a garment and tries it on is dramatically more likely to buy than someone who walks past the rail. But the shop didn’t create that intent by having a good fitting room. The fitting room is just the moment of capture. The intent was built somewhere upstream, by the window display, by what a friend said, by a memory of the brand from years ago. The fitting room gets the sale. It didn’t earn it.

Performance marketing is a very efficient fitting room. It is not a window display. If you stop investing in the window display and just optimise the fitting room, your numbers look fine right up until the point when they don’t.

Why Brand Advertising Gets Cut First

There’s a structural reason why brand budgets get cut before performance budgets when times get tight, and it’s not because CFOs are irrational. It’s because performance marketing produces numbers that look like proof, and brand advertising produces numbers that look like estimates.

When you’re presenting to a board and you need to justify spend, a chart showing cost-per-acquisition across your paid search campaigns is a much easier conversation than an explanation of how your brand awareness investment is building future demand. One has a denominator. The other requires a theory of how markets work.

I’ve had that conversation from both sides of the table. As an agency leader, I’ve made the case for brand investment to clients who wanted to cut it. As someone who’s managed P&Ls, I understand the pressure to show the number that’s in the spreadsheet. The tension is real. But the resolution that most businesses land on, which is to treat brand investment as discretionary and performance as essential, gets the logic backwards.

Performance is only efficient when there’s sufficient brand awareness to generate demand. The moment you hollow out your brand presence, your performance costs start to rise because you’re competing harder for a shrinking pool of people who already know you. The lag between cause and effect is long enough that most businesses don’t connect the dots until the damage is done.

BCG’s work on go-to-market strategy in financial services makes a related point about the cost of trying to recapture audiences you’ve lost touch with. It’s always more expensive to rebuild familiarity than to maintain it. That principle holds well beyond financial services.

The Incrementality Question

The right question to ask of any advertising investment is not “did sales go up while this was running?” It’s “did this advertising cause sales that would not have happened otherwise?”

That’s the incrementality question, and it’s much harder to answer than most reporting frameworks are designed to handle. Incrementality testing, holdout groups, geo-based experiments, these are the tools that get you closer to a real answer. They’re also more expensive and more operationally complex than running a standard attribution report. Which is why most businesses don’t do them, and why most advertising effectiveness conversations are built on shakier foundations than the people having them realise.
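The mechanics of a holdout test are simple even if the operations aren’t. You withhold ads from a randomly selected segment and compare conversion rates, not raw conversion counts, between the two groups. A sketch with hypothetical numbers shows why this produces a very different answer from attribution:

```python
# Holdout-based incrementality read (hypothetical numbers).
# Treatment group sees the ads; holdout group has them withheld.
treated_users, treated_conversions = 100_000, 2_400   # saw ads
holdout_users, holdout_conversions = 100_000, 2_000   # ads withheld

treated_rate = treated_conversions / treated_users    # 2.4%
holdout_rate = holdout_conversions / holdout_users    # 2.0%

# The difference in rates is what the advertising caused.
incremental_rate = treated_rate - holdout_rate
incremental_conversions = incremental_rate * treated_users  # ~400

# An attribution report would credit the ads with all 2,400 treated-group
# conversions; the experiment says only ~400 wouldn't have happened anyway.
lift = incremental_rate / holdout_rate                # relative lift
print(f"incremental conversions: {incremental_conversions:.0f}, "
      f"lift: {lift:.0%}")
```

In this made-up example, attribution overstates the advertising’s contribution by a factor of six. The real-world versions of this test need proper randomisation, sufficient sample sizes, and significance checks, but the core logic is exactly this subtraction.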

When I judged the Effie Awards, the entries that impressed me most were the ones that could show genuine business movement, not just channel metrics. The ones that showed how a campaign reached people who had never considered the brand before, and what happened to those people over time. That’s a much harder story to tell than “our CPA was 23% below target,” but it’s the story that actually explains whether advertising worked.

The Effies exist precisely because the industry recognised that standard metrics don’t capture effectiveness. You don’t win an Effie for having a good click-through rate. You win it for demonstrating that your marketing changed business outcomes in a way that can be traced back to a specific strategic decision. That bar is high. Most campaigns don’t clear it, not because the advertising was bad, but because the measurement wasn’t designed to find the answer.

If you’re thinking about how advertising fits into a broader growth framework, the Go-To-Market and Growth Strategy hub covers the strategic context in more depth, including how demand creation and demand capture need to work together across the funnel.

Reaching People Who Don’t Know You Yet

Growth, real growth, not optimisation of existing demand, requires reaching people who have never considered your brand. That’s an uncomfortable truth for performance-first organisations because those people don’t show up in your retargeting audiences. They don’t click your branded search ads. They don’t convert on your email flows. They’re invisible to your measurement stack precisely because they haven’t interacted with you yet.

The only way to reach them is to go to where they are, not where your existing customers are. That means broadcast in some form, whether that’s traditional media, social video, creator partnerships, or any channel that distributes your message to people who didn’t ask for it. That’s the original definition of advertising, and it still works, when it’s done with enough clarity and consistency to actually land.

Creator-led campaigns, for example, are interesting precisely because they reach audiences that brands can’t easily access through owned channels. Later’s work on creator-led go-to-market approaches highlights how this kind of reach can drive conversion when the creative and the audience match well. The mechanism is the same as traditional advertising: you’re introducing your brand to people who weren’t already looking for you.

The challenge is that this kind of advertising is harder to measure precisely. You can track downstream conversion. You can run brand lift studies. You can use matched market tests. But you can’t produce a clean cost-per-acquisition number that makes the CFO comfortable in the same way a paid search report does. That measurement gap is real. The answer to it is honest approximation, not false precision.

The Honest Version of What Your Metrics Are Telling You

I’m not suggesting you ignore your performance metrics. I’m suggesting you read them correctly.

When your paid search CPA is low, that tells you that people who were already looking for what you sell are finding you efficiently. That’s good. It doesn’t tell you why those people were looking, or how many more people could be looking if your brand was more present in their lives before the search moment.

When your retargeting ROAS is strong, that tells you that people who already showed interest are converting at a reasonable cost. That’s good. It doesn’t tell you whether those people would have converted anyway through direct traffic or organic search, or whether you’re paying to claim credit for a sale that was already decided.

When your overall revenue is growing, that’s the number that matters. But the attribution model that tells you which channel caused that growth is a perspective on reality, not reality itself. Treat it as one input among several, not as the definitive answer.

Forrester’s intelligent growth model framework makes a similar point about the relationship between measurement sophistication and strategic clarity. Better data doesn’t automatically produce better decisions. The decisions depend on how you interpret the data, and interpretation requires a theory of how your market works, not just a dashboard.

What Good Advertising Measurement Actually Looks Like

The businesses that measure advertising well tend to do a few things differently from the ones that don’t.

They separate demand creation metrics from demand capture metrics. They don’t use the same framework to evaluate a brand awareness campaign and a retargeting campaign, because those campaigns are doing fundamentally different things. Applying a CPA lens to brand advertising is like judging a window display by how many people it converted in the shop. It’s not the right question.

They invest in incrementality testing, even when it’s inconvenient. Running holdout groups means some potential customers don’t see your ads. That’s a short-term cost. The benefit is that you actually know whether your advertising is working, rather than assuming it is because the attribution model said so.

They track brand health metrics alongside commercial metrics. Awareness, consideration, preference, these are leading indicators of future revenue. They move slowly and they’re harder to tie to specific campaigns, but they tell you whether you’re building the kind of brand presence that makes performance marketing efficient over time.

And they’re honest about what they don’t know. The most commercially sophisticated marketing leaders I’ve worked with are the ones who can say “we think this is working based on the following evidence, and here’s what we’re not certain about.” That’s a harder conversation to have than presenting a clean attribution report, but it’s the one that leads to better decisions.

Resources like Semrush’s analysis of growth approaches, and the tools that support them, can be useful for identifying where demand exists and how to reach it efficiently. The caveat, as always, is that these tools measure what’s measurable. The unmeasured parts of your market are often where the growth is.

Does Advertising Work?

Yes. Advertising works. It works when it reaches people who weren’t previously considering your brand and changes their mental availability. It works when it builds the kind of familiarity and preference that makes someone choose you over a competitor at the moment of purchase, even when your product isn’t obviously superior. It works when it creates the demand that performance marketing then captures.

What advertising doesn’t always do is show up cleanly in your attribution model. The sale it caused might be attributed to a branded search click that happened three weeks later. The customer it created might not appear in your data until their second purchase. The brand preference it built might be invisible until the moment a competitor starts spending heavily and your organic share doesn’t collapse the way it would have if you hadn’t been investing.

The question “does advertising work?” is almost always the wrong question. The right questions are: which advertising, for which audience, creating which kind of effect, and how would we know? Those questions are harder to answer, but they’re the ones that lead to better marketing decisions and better business outcomes.

My first week at Cybercom, I found myself holding a whiteboard pen in a brainstorm for Guinness after the founder had to leave for a client meeting. He handed it to me mid-session. My internal reaction was something close to panic. But the thing that got me through it wasn’t expertise in stout or some clever creative technique. It was the ability to ask the right question: what are we actually trying to change in how people think about this brand? Everything else followed from that. It’s still the question I start with, twenty years later.

If you’re building a go-to-market approach that takes both demand creation and demand capture seriously, the thinking in the Go-To-Market and Growth Strategy hub covers the strategic frameworks that connect advertising investment to commercial outcomes across the full funnel.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

Does advertising actually drive sales or just capture existing demand?
Both, depending on the type of advertising. Performance advertising, such as paid search and retargeting, primarily captures demand that already exists. Brand advertising creates new demand by reaching people who were not previously considering your product. Businesses that only invest in performance marketing are harvesting a crop they didn’t plant, and the harvest shrinks over time if nothing is being grown upstream.
Why does attribution make advertising look more effective than it is?
Attribution models assign credit to touchpoints within a recorded customer experience. They can only measure interactions that were tracked, and they cannot account for what would have happened without a given touchpoint. This means they systematically over-credit the last touchpoints before conversion, typically performance channels, and under-credit the brand-building activity that created the intent to buy in the first place.
What is incrementality testing and why does it matter?
Incrementality testing measures whether advertising caused sales that would not have happened without it. The most common method is running a holdout group, a segment of your audience that doesn’t see your ads, and comparing their conversion behaviour to those who did. It’s more operationally complex than standard attribution reporting, but it’s the closest thing available to a real answer on whether your advertising is working.
How should brand advertising be measured if not by CPA?
Brand advertising should be measured against brand health metrics, including awareness, consideration, and preference, alongside longer-term commercial indicators such as organic search volume, direct traffic trends, and category share. Geo-based matched market tests and brand lift studies can also provide evidence of causal impact. The goal is honest approximation of business effect, not a precise CPA number that creates false confidence.
What happens to performance marketing costs when brand investment is cut?
When brand investment is cut, the pool of people who are already aware of and interested in your brand gradually shrinks. This means performance marketing has fewer high-intent users to capture, so costs typically rise over time as you compete harder for a smaller audience. The lag between cutting brand spend and seeing performance costs increase is usually long enough that businesses don’t connect the two, which makes it a recurring and expensive mistake.
