Fallacies in Advertising That Even Smart Marketers Believe
Fallacies in advertising are flawed assumptions that shape strategy, justify spend, and survive budget reviews, not because they are true, but because they are convenient. They get repeated often enough that they start to feel like received wisdom, and by the time someone challenges them, the budget has already been allocated around them.
Most of these fallacies are not obvious. They are dressed in the language of data, attribution models, and best practice. That is what makes them dangerous.
Key Takeaways
- Attribution models do not reveal where value was created. They reveal where it was measured, and those are very different things.
- Performance marketing captures existing demand more than it creates new demand. Conflating the two leads to chronic underinvestment in brand.
- Audience targeting precision can shrink the pool of people who ever hear your message, limiting growth before a campaign even launches.
- Last-click and even multi-touch attribution systematically over-credit channels that sit close to conversion and under-credit channels that built the intent in the first place.
- The most expensive fallacy in advertising is not a bad creative decision. It is a structural assumption that goes unexamined for years.
In This Article
- The Attribution Fallacy: Measuring Where Value Was Captured, Not Where It Was Created
- The Performance Fallacy: Confusing Demand Capture With Demand Creation
- The Targeting Fallacy: Precision That Shrinks Your Market
- The Frequency Fallacy: Assuming More Exposure Equals More Persuasion
- The Creative Fallacy: Treating Execution as Secondary to Targeting
- The Incrementality Fallacy: Assuming Every Conversion Was Caused by the Campaign
- The Consistency Fallacy: Assuming What Worked Before Will Keep Working
- The Measurement Fallacy: Treating Metrics as Objectives
- What to Do With These Fallacies
I have spent more than 20 years in agency leadership and performance marketing, including growing a team from 20 to over 100 people and managing hundreds of millions in ad spend across 30 industries. In that time, I have seen the same fallacies surface in boardrooms, strategy decks, and client briefs with remarkable consistency. They are not unique to any sector or budget level. They are structural problems in how the industry thinks.
If you are working through go-to-market strategy, channel selection, or growth planning, the broader context for these arguments lives in the Go-To-Market and Growth Strategy hub, where I cover the commercial mechanics behind sustainable growth.
The Attribution Fallacy: Measuring Where Value Was Captured, Not Where It Was Created
The attribution fallacy is probably the most consequential in modern advertising, because it is wrapped in data and therefore feels objective. The first assumption is that because a channel appears in the conversion path, it contributed meaningfully to the sale. The second, in last-click models, is that the final touchpoint deserves most or all of the credit.
Both assumptions are wrong in ways that cost companies real money.
When I was leading performance at iProspect, I watched attribution models consistently over-credit branded paid search. Someone had already decided to buy. They typed the brand name into Google. We captured the click, the model assigned the conversion, and the channel looked like a hero. But the intent was built somewhere else, by TV, by word of mouth, by a piece of content they read three weeks earlier. The paid search campaign did not create that intent. It was standing at the door when it arrived.
This is not a criticism of paid search. It is a criticism of treating attribution as proof of causation. The channel that sits closest to conversion will always look disproportionately effective in a last-click or even a weighted multi-touch model. That does not mean it is doing the heaviest lifting.
The practical consequence is chronic underinvestment in upper-funnel activity. If your model cannot see where awareness was built, it will not reward the channels that built it. Over time, those channels get cut. The pipeline looks fine for a while because there is residual demand in the system. Then growth stalls, and no one can quite explain why.
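To see the structural bias rather than just assert it, here is a deliberately minimal sketch in Python. The conversion path and channel names are invented, and this is not how any real attribution platform works internally; it simply shows what a last-click rule does to credit, regardless of where the intent was built.

```python
# Toy last-click model with an invented conversion path. Illustration only:
# real attribution platforms are far more complex, but the credit rule is not.

def last_click_credit(touchpoints):
    """Give 100% of conversion credit to the final touchpoint, 0% to the rest."""
    return {tp: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)}

path = ["TV spot", "word of mouth", "article read weeks earlier", "branded paid search"]
print(last_click_credit(path))
# {'TV spot': 0.0, 'word of mouth': 0.0,
#  'article read weeks earlier': 0.0, 'branded paid search': 1.0}
# Whichever channel built the intent, the model rewards the one standing at the door.
```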
The Performance Fallacy: Confusing Demand Capture With Demand Creation
Earlier in my career, I overvalued lower-funnel performance channels. I was not alone. The industry was moving toward measurability, and measurability meant performance, and performance meant conversion. It felt rigorous. It felt accountable.
The problem is that a significant portion of what performance marketing gets credited for was going to happen anyway. The person was already in market. They had already formed an intent. The ad met them at the point of action, not at the point of decision.
Think about a clothes shop. Someone who tries something on is many times more likely to buy than someone who is just browsing. You could run an experiment, measure the conversion rate of people in the changing room, and conclude that the changing room is your most effective sales tool. You would not be wrong, exactly. But you would be missing the point. The changing room did not create the desire to buy. The window display did. The brand did. The recommendation from a friend did. The changing room just happened to be the last step before the till.
Performance marketing is the changing room. It is essential. But it does not replace the work that gets people through the door.
This fallacy is particularly acute in sectors where intent is high and the buying pool is finite. In B2B financial services marketing, for example, the addressable audience is often small and already well-covered by competitors. Hammering the bottom of the funnel harder does not expand the market. It just increases the cost of competing for the same small group of buyers who were already in play.
The Targeting Fallacy: Precision That Shrinks Your Market
Audience targeting has become increasingly granular, and the industry has largely treated this as an unambiguous good. More precision means less waste. Less waste means better efficiency. Better efficiency means better ROI.
The fallacy is in the assumption that the people outside your targeting parameters are not worth reaching.
When I judged the Effie Awards, one of the things that stood out in the strongest campaigns was how often the brief had been written broadly rather than narrowly. The brands that were growing were not just talking to their existing customers or even their most likely prospects. They were talking to people who had not yet formed a preference, people who were not yet in market but would be. That is where future customers come from.
Hyper-targeted campaigns are efficient at reaching people who are already predisposed to buy. They are poor at expanding the pool of people who might buy in the future. If your entire strategy is built around precision targeting, you are optimising for the harvest while neglecting the planting.
This connects to the broader argument about endemic advertising, where contextual relevance can do the work that demographic targeting attempts to do, often more effectively and without the audience shrinkage problem that comes with over-segmentation.
The practical test is simple: if you took your targeting parameters and applied them to the total addressable market, what percentage of potential future customers would you be reaching? If the answer is a very small fraction, you have a growth ceiling built into your media strategy.
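For anyone who wants to run that test rather than eyeball it, the arithmetic is trivial. Both audience figures below are hypothetical placeholders, there to show the shape of the calculation rather than to suggest benchmarks:

```python
# Back-of-envelope reach check. The numbers are assumptions; substitute your own.

total_addressable_market = 2_000_000  # everyone who could plausibly become a customer
targeted_audience = 120_000           # people inside your current targeting parameters

share_reached = targeted_audience / total_addressable_market
print(f"Share of potential future customers reachable: {share_reached:.0%}")  # 6%
# A single-digit answer means the growth ceiling is built into the media plan.
```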
The Frequency Fallacy: Assuming More Exposure Equals More Persuasion
There is an old assumption in advertising that exposure frequency drives persuasion: the more times someone sees your message, the more likely they are to act. At some level this is true. A single impression rarely changes behaviour. But the relationship between frequency and persuasion is not linear, and treating it as linear produces campaigns that annoy more than they convert.
I have seen this play out repeatedly in programmatic campaigns where frequency caps were either absent or set too high. The media team would point to reach and frequency numbers as evidence of a well-delivered campaign. The brand team would start getting calls from clients asking why their customers kept complaining about seeing the same ad ten times in a day. The two teams were measuring different things and neither was measuring what mattered.
Frequency has diminishing returns, and beyond a certain threshold it has negative returns. It damages brand perception. It trains audiences to ignore your creative. It wastes budget that could be reaching new people. The optimal frequency varies by category, creative quality, and media channel, but the fallacy is assuming that more is always better when the evidence consistently points in the other direction.
This is worth examining carefully during digital marketing due diligence, particularly when inheriting a media plan from a previous agency or in-house team. High frequency numbers often look like strong delivery. They are sometimes evidence of a campaign that has been burning budget on the same small audience repeatedly while reach has flatlined.
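One rough check during that kind of audit, using invented delivery numbers below: it assumes frequency is spread evenly across users, which it never is in practice, so treat the output as a conversation starter rather than a precise waste figure.

```python
# Crude frequency audit. Inputs are hypothetical; the uniform-frequency
# assumption understates how concentrated real delivery usually is.

impressions = 5_000_000   # total impressions delivered
unique_reach = 500_000    # unique users reached
working_cap = 4           # assumed number of useful exposures per user

avg_frequency = impressions / unique_reach           # 10.0
useful = unique_reach * working_cap                  # impressions doing persuasive work
excess_share = max(0.0, 1 - useful / impressions)    # share delivered beyond the cap

print(f"Average frequency: {avg_frequency:.1f}")
print(f"Share of impressions beyond the working cap: {excess_share:.0%}")  # 60%
```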
The Creative Fallacy: Treating Execution as Secondary to Targeting
The shift toward data-driven marketing has produced a secondary fallacy: the idea that if you get the targeting right, the creative matters less. The logic is that a mediocre message delivered to exactly the right person will still convert, because the intent is already there.
This is partially true at the very bottom of the funnel, where someone is already searching for what you sell and the ad is essentially a signpost. But it falls apart everywhere else.
Early in my career I was handed a whiteboard pen in a Guinness brainstorm at Cybercom when the founder had to leave for a client meeting. The room was full of people who knew the brand, knew the brief, and were perfectly capable of generating ideas. But the quality of what came out of that session was directly tied to the quality of the creative thinking in the room, not to how well we had defined the target audience. The audience was obvious. What was hard was finding something worth saying to them.
Creative quality is a multiplier on media efficiency. A strong piece of creative delivered to a broad audience will outperform a weak piece of creative delivered to a perfectly segmented one. The industry’s obsession with targeting precision has, in some cases, come at the direct expense of investment in creative, and the results show it.
When you are working through a website and sales strategy audit, one of the things worth examining is whether the creative assets being used in advertising are actually doing any persuasive work, or whether they are simply functional. The difference matters more than most attribution models will tell you.
The Incrementality Fallacy: Assuming Every Conversion Was Caused by the Campaign
This is the fallacy that sits beneath most performance marketing reporting, and it is rarely named directly. When a campaign reports 10,000 conversions, the implicit assumption is that those 10,000 conversions would not have happened without the campaign. That assumption is almost never tested.
Incrementality testing, where you measure the difference in conversion rates between exposed and unexposed groups, consistently shows that a meaningful proportion of conversions attributed to paid campaigns would have occurred organically. The percentage varies by channel, category, and brand strength, but it is rarely zero and sometimes it is surprisingly high.
This has direct implications for how you evaluate channel performance and how you allocate budget. A channel that reports 10,000 conversions but only drove 3,000 incremental ones is a very different investment proposition from the one the headline number suggests. The cost per incremental conversion is more than three times what the cost per attributed conversion implies.
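The arithmetic behind that gap is worth making explicit. The sketch below reuses the 10,000 and 3,000 figures from the example above; the audience size, conversion rates, and spend are hypothetical, added only to make the cost comparison concrete:

```python
# Incrementality arithmetic on hypothetical inputs (holdout-style test).

exposed_users = 1_000_000
exposed_rate = 0.010   # conversion rate in the exposed group
holdout_rate = 0.007   # conversion rate in the unexposed control group

attributed = exposed_users * exposed_rate   # 10,000 conversions the channel reports
baseline = exposed_users * holdout_rate     # 7,000 that would have happened anyway
incremental = attributed - baseline         # 3,000 the campaign actually caused

spend = 500_000  # assumed media spend, arbitrary currency
print(f"Cost per attributed conversion:  {spend / attributed:.2f}")   # 50.00
print(f"Cost per incremental conversion: {spend / incremental:.2f}")  # 166.67
# Same campaign, same spend: a 3.3x gap between the reported and the real cost.
```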
In models like pay per appointment lead generation, this distinction is baked into the commercial structure. You pay for outcomes, so the question of incrementality is at least partially addressed by the pricing model. But in most standard media buying, the incrementality question is never asked, and the budget decisions that follow are built on an inflated view of channel contribution.
The Consistency Fallacy: Assuming What Worked Before Will Keep Working
Advertising strategies get locked in. A channel works, the team becomes expert in it, the reporting infrastructure is built around it, and over time it becomes the default. The fallacy is assuming that because something worked in a previous market condition, it will continue to work as conditions change.
Markets mature. Competitors enter. Algorithms change. Audiences shift. The strategy that built a business from zero to a certain scale is often not the strategy that takes it to the next level, because the conditions that made it work are different at scale. Reaching the first 10,000 customers is a different problem from reaching the next 100,000.
I have turned around loss-making agencies and inherited marketing functions where the strategy had not been meaningfully reviewed in years. The team was executing well against a plan that had stopped being fit for purpose. Nobody had noticed because the metrics being tracked were activity metrics, not outcome metrics. Impressions were up. Clicks were up. Revenue was flat.
For B2B technology companies in particular, this is a structural risk. The corporate and business unit marketing framework for B2B tech companies addresses how to build a structure that can adapt as the business scales, rather than one that calcifies around the tactics that worked at an earlier stage.
The discipline required is regular, honest review of whether the strategy is still fit for purpose, not just whether the execution is efficient. Efficient execution of the wrong strategy is still the wrong strategy.
The Measurement Fallacy: Treating Metrics as Objectives
The final fallacy is one that cuts across all the others: the confusion between what can be measured and what matters. Advertising generates enormous volumes of data, and the availability of that data creates a gravitational pull toward optimising for whatever is measurable, regardless of whether it is the right thing to optimise for.
Click-through rate is measurable. Engagement rate is measurable. Cost per lead is measurable. Brand preference is harder to measure. Long-term customer value is harder to measure. The contribution of advertising to pricing power is very hard to measure. So teams optimise for the measurable things and the harder-to-measure things get deprioritised.
The result is a systematic bias in advertising strategy toward short-term, bottom-funnel, easily attributable activity, and away from the kind of brand-building work that creates durable competitive advantage. The reason go-to-market feels harder than it used to for many teams is partly that they have spent years optimising for metrics that looked good in dashboards while the underlying brand equity was quietly eroding.
Measurement is essential. But measurement is a perspective on reality, not reality itself. The map is not the territory. The metric is not the outcome. Good advertising strategy requires being honest about what the data can and cannot tell you, and making judgment calls in the space where data runs out.
The BCG research on financial services go-to-market strategy makes a similar point about the limits of purely data-driven approaches in complex markets. The data tells you what happened. It does not always tell you why, and it rarely tells you what to do next.
What to Do With These Fallacies
Naming a fallacy is the first step. The harder work is building the organisational habits that prevent them from taking root in the first place.
That means asking incrementality questions before accepting attributed conversion numbers. It means reviewing audience targeting parameters against total addressable market size. It means holding attribution models to account by testing them against real-world outcomes. It means investing in creative quality as a strategic variable, not a production cost. And it means building a review cadence that challenges the strategy itself, not just the execution.
None of this requires abandoning data. It requires being honest about what the data is actually showing you, and what it is not.
The growth hacking literature is full of examples where teams optimised their way into a local maximum, a point where everything looked good in the dashboard but the business had stopped growing. The way out is usually not better optimisation. It is a more honest diagnosis of what is actually happening and why.
The broader frameworks for making these judgments, across channel strategy, go-to-market planning, and commercial measurement, are covered in depth in the Go-To-Market and Growth Strategy hub. If you are building or reviewing a strategy, that is a good place to stress-test the assumptions underneath it.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
