Logical Fallacies in Advertising Are Costing You Good Decisions
Logical fallacies in advertising are flawed reasoning patterns that lead marketers to draw wrong conclusions from real data, plausible-sounding arguments, or convincing anecdotes. They show up in briefs, budget reviews, creative rationales, and boardroom presentations, and most of the time nobody flags them because they sound completely reasonable on the surface.
The problem is not that marketers are irrational. The problem is that the industry has built a culture of confident assertion, where whoever speaks most convincingly often wins the room, regardless of whether the logic holds up. That is expensive, and it is avoidable.
Key Takeaways
- Logical fallacies in advertising are not edge cases. They appear in everyday decisions around budget allocation, creative approval, and channel strategy.
- Post hoc reasoning, where correlation is mistaken for causation, is one of the most common and costly errors in performance marketing.
- Appeal to authority and bandwagon thinking push brands toward imitation rather than differentiation, which is the opposite of what drives growth.
- Survivorship bias distorts how marketers learn from case studies, because the failures that would balance the picture are rarely published.
- Recognising a fallacy in the room does not require a philosophy degree. It requires the confidence to ask one clear question: what is the actual evidence for that claim?
In This Article
- Why Logical Fallacies Thrive in Marketing Environments
- Post Hoc Reasoning: The Fallacy That Performance Marketing Built Its Reputation On
- Appeal to Authority: When “They Did It” Becomes a Strategy
- The Bandwagon Fallacy: Why Everyone Doing It Is Not a Good Reason
- Survivorship Bias: The Case Studies You Never See
- False Dilemma: When the Choice Is Presented as Binary
- Ad Hominem and Appeal to Emotion: The Fallacies in Creative Approvals
- Hasty Generalisation: Drawing Big Conclusions From Small Samples
- How to Build a Team Culture That Catches These Errors
- The Commercial Cost of Letting Fallacies Run
I have been in rooms where a campaign got approved because the CEO liked it, where a channel got cut because the last three months looked weak, and where a competitor’s tactic got copied wholesale because “they must know something we don’t.” All of those decisions had a logical fallacy at the centre. None of them were obvious at the time.
Why Logical Fallacies Thrive in Marketing Environments
Marketing is a discipline that runs on persuasion, which makes it unusually vulnerable to bad reasoning. The same skills that make someone a good copywriter or a compelling presenter (the ability to construct a narrative, to make a connection feel inevitable) also make it easier to dress up flawed logic in professional language.
Add to that the commercial pressure most marketing teams operate under. Decisions need to be made fast. Data is incomplete. Attribution is messy. And there is always someone in the meeting who has a strong opinion and a slide deck to back it up. In that environment, the path of least resistance is to accept the argument that sounds most credible, not the one that is most logically sound.
I spent several years growing an agency from around 20 people to over 100, and one of the things I noticed consistently was that the quality of decisions did not improve automatically with the size of the team. What improved was the confidence with which bad decisions got made. More people, more data, more process, and still the same underlying reasoning errors, just packaged more professionally.
If you want to get sharper on the broader strategic context, the Go-To-Market and Growth Strategy hub covers the frameworks and thinking that sit behind decisions like these. This article focuses specifically on the reasoning errors that undermine those decisions before they even get made.
Post Hoc Reasoning: The Fallacy That Performance Marketing Built Its Reputation On
Post hoc ergo propter hoc. After this, therefore because of this. It is the assumption that because B followed A, A must have caused B. And it is everywhere in digital advertising.
Someone clicks a retargeting ad and converts. The platform reports a conversion. The team celebrates the ROAS. But the honest question nobody asks is: would that person have converted anyway? They had already visited the website. They had already shown intent. The ad may have done nothing except appear in the path and claim the credit.
Earlier in my career, I overvalued lower-funnel performance marketing for exactly this reason. The numbers looked clean. The attribution was trackable. The case for continued investment was easy to make. It took time, and a few honest conversations with clients about why their overall business was not growing despite strong ROAS numbers, to realise that much of what performance was being credited for was going to happen anyway. We were capturing intent that already existed, not creating new demand.
Think about a clothes shop. Someone who picks something up and tries it on is far more likely to buy than someone who walks past the rail. But if the shop credited every sale to the fitting room, they would stop investing in the window display, the store layout, and the brand that brought the customer through the door in the first place. That is what post hoc reasoning does to marketing budgets over time. It shifts money toward the last touchpoint and starves everything upstream.
Tools like those covered in Semrush’s overview of growth tools can help you see patterns across your funnel. But no tool removes the need to interrogate whether correlation in your data reflects causation in the real world.
Appeal to Authority: When “They Did It” Becomes a Strategy
The appeal to authority fallacy is the assumption that a claim is correct because a credible person or organisation said it. In advertising, this shows up in two distinct ways: citing industry experts to justify a strategic direction, and copying what a successful competitor is doing on the basis that they must be right.
Neither is inherently wrong. Expertise matters. Competitive intelligence is useful. The fallacy is in treating authority as a substitute for evidence rather than a prompt to investigate further.
I judged the Effie Awards, which are specifically designed to recognise marketing effectiveness rather than creative craft alone. What struck me going through the entries was how often the most awarded campaigns had done something that contradicted conventional wisdom for their category. They had not followed the authority. They had challenged it, tested a different hypothesis, and measured what actually happened.
Brands that imitate the category leader rarely close the gap. They reinforce it, because they are playing by someone else's rules on someone else's turf. The appeal to authority fallacy, when applied to competitive strategy, is a reliable path to being a permanent number two.
Frameworks like the Forrester intelligent growth model are worth understanding precisely because they push back on assumption-led thinking and ask for evidence before commitment. That is not anti-authority. That is how authority should be used, as a starting point for thinking, not an endpoint for decision-making.
The Bandwagon Fallacy: Why Everyone Doing It Is Not a Good Reason
Closely related to appeal to authority is the bandwagon fallacy: the argument that because many people or many brands are doing something, it must be the right thing to do. In marketing, this manifests as trend-chasing, and it is one of the most expensive habits the industry has.
When short-form video exploded, brands that had no business being on TikTok built TikTok strategies. When creator marketing became the dominant conversation, every brief suddenly required influencer integration, regardless of whether the audience or the product made it sensible. The logic was always the same: everyone else is doing it, so we need to be doing it too.
The problem is that “everyone is doing it” is a description of market saturation, not market opportunity. By the time a tactic is widespread enough to be cited as proof of its value, the early-mover advantage has already been captured by someone else. You are not pioneering. You are following, and you are paying full price to do it.
That does not mean ignoring emerging channels. There is genuine value in understanding how creator-led approaches are evolving, and resources like Later’s work on go-to-market with creators show how thoughtful execution in that space can drive real outcomes. The distinction is between adopting a channel because the evidence suggests it fits your audience and your objectives, versus adopting it because the industry is excited about it.
When I ran agencies, the clients who made the best decisions were the ones who asked “why would this work for us specifically?” before they asked “how do we do this?” The bandwagon fallacy collapses that sequence. It starts with the tactic and works backward to a justification, which is the opposite of how strategy is supposed to function.
Survivorship Bias: The Case Studies You Never See
Survivorship bias is the tendency to focus on examples that succeeded while ignoring the far larger number that failed, because the failures are not visible. In advertising, this is endemic to how the industry learns.
Award shows celebrate the campaigns that worked. Trade press covers the launches that landed. Conference speakers tell the story of the pivot that saved the business. Nobody publishes the case study about the brand that tried the same approach and lost six months and a significant budget finding out it did not work for them.
The result is a systematically distorted picture of what is effective. Marketers are drawing conclusions from a sample that has been filtered by success, which makes those conclusions far less reliable than they appear. If you see ten case studies of brands that succeeded with a particular strategy, you have no idea whether you are looking at ten successes from eleven attempts or ten successes from two hundred attempts. The denominator is invisible.
I have turned around loss-making businesses, and one of the first things I learned in that context is that the post-mortems are more valuable than the victory laps. Understanding what did not work, and why, is more useful for decision-making than cataloguing what did. But the industry does not reward post-mortems. It rewards confident narratives about success, which is exactly the condition that lets survivorship bias thrive.
BCG’s research on scaling agile practices makes a similar point in a different context: the organisations that scale well are the ones that build feedback mechanisms that surface failure signals early, not the ones that wait for a success story to emerge and then try to replicate it.
False Dilemma: When the Choice Is Presented as Binary
The false dilemma fallacy presents a situation as having only two possible options when more exist. In marketing strategy, this shows up constantly in budget and channel debates.
Brand versus performance. Awareness versus conversion. Long-term versus short-term. These are not binary choices. They are trade-offs that exist on a spectrum, and the framing of them as either/or is almost always a sign that someone is trying to win an argument rather than solve a problem.
The brand versus performance debate is the one I have sat in most often, usually with a CFO on one side and a brand director on the other, both presenting the choice as existential when the real question is how to allocate across both in a way that serves the business. The false dilemma framing makes that question impossible to answer because it has already ruled out the answer before the discussion starts.
BCG’s work on go-to-market strategy and pricing is a useful reminder that commercial decisions rarely reduce to two options. The complexity is the point. Forcing a binary choice is a way of making a complex decision feel manageable, but it does so by removing the nuance that would lead to a better outcome.
When someone presents you with a binary choice in a marketing context, the right response is almost always to ask what the third option is. There usually is one.
Ad Hominem and Appeal to Emotion: The Fallacies in Creative Approvals
Not all logical fallacies in advertising are about strategy and data. Some of the most damaging ones happen in creative development and approval processes.
Ad hominem is the fallacy of attacking the person making an argument rather than the argument itself. In creative reviews, this often looks like dismissing an idea because of who pitched it, or accepting it because of who is in the room. “The CEO loves it” is not a creative brief. Neither is “the junior team came up with it, so let’s get a second opinion.” The idea should be evaluated on its merits, not on the status of its originator.
Early in my career, I was handed the whiteboard pen in a Guinness brainstorm when the agency founder had to leave for a client meeting. The internal reaction in the room, including my own, was something close to panic. But the work that came out of that session was good, not because of who was holding the pen, but because the problem was clear and the thinking was sound. Status in the room had nothing to do with it. The ad hominem instinct, to weight ideas by their source rather than their substance, would have killed it before it started.
Appeal to emotion is the mirror image: the argument that a piece of creative must be right because it makes people feel something. Emotional resonance matters enormously in advertising. But “it made me cry in the review” is not a measure of effectiveness. It is a measure of craft. The two are related but not the same, and confusing them leads to creative decisions that look beautiful and do nothing for the business.
Hasty Generalisation: Drawing Big Conclusions From Small Samples
Hasty generalisation is the fallacy of drawing a broad conclusion from a sample that is too small, too narrow, or too unrepresentative to support it. In advertising, this is the error behind most bad creative testing, most misread campaign results, and most premature channel conclusions.
A campaign runs for three weeks. The results look weak. The channel gets cut. But three weeks in a category with a long consideration cycle tells you almost nothing about the channel’s actual contribution. The sample is too small and the time horizon is too short to draw the conclusion that was drawn.
Equally common: a focus group of twelve people dislikes a creative direction, and the campaign gets shelved. Or a social post goes viral and the team concludes they have cracked the content formula. Both are hasty generalisations. Both lead to decisions that would not survive basic scrutiny if the reasoning were written out plainly.
The challenge in marketing is that the pressure to make decisions often outpaces the availability of sufficient data. That is a real constraint, and it means decisions sometimes have to be made on incomplete information. But there is a difference between acknowledging that you are working with limited data and pretending that limited data is conclusive. The fallacy is in the pretending.
Approaches like those discussed in Crazy Egg’s work on growth methodology are useful partly because they build iterative testing into the process, which is a structural way of guarding against hasty generalisation. You do not need a perfect dataset before you act. You need a commitment to updating your conclusions as better data comes in.
How to Build a Team Culture That Catches These Errors
Identifying logical fallacies in the abstract is straightforward. Catching them in real time, in a meeting where someone senior is making the argument, is considerably harder. It requires a team culture that treats rigorous thinking as a professional value rather than a challenge to authority.
A few things that have worked in practice:
First, normalise the question “what would have to be true for this to be right?” It is not an aggressive question. It is a clarifying one. It forces the person making the argument to articulate their assumptions, which is where most fallacies live.
Second, separate the evaluation of ideas from the evaluation of the people who proposed them. This requires explicit process, not just good intentions. If the same people always win the creative review, that is a signal that the evaluation criteria are social rather than substantive.
Third, build post-mortems into the workflow, not just post-launch reviews. A post-mortem asks what you got wrong in your assumptions, not just what the results were. That is the mechanism that catches survivorship bias and hasty generalisation before they calcify into institutional belief.
Fourth, be honest about what your data can and cannot tell you. Vidyard’s research on pipeline and revenue potential for GTM teams is a useful example of how to frame data claims carefully, distinguishing between what the numbers show and what they imply. That distinction matters, and most marketing data presentations do not make it.
The broader context for all of this sits in how you approach growth strategy as a discipline. If you are building or refining your approach, the Go-To-Market and Growth Strategy hub covers the strategic frameworks that give these decisions a proper foundation.
The Commercial Cost of Letting Fallacies Run
None of this is abstract. Logical fallacies in advertising have a direct commercial cost, and it compounds over time.
Post hoc reasoning redirects budget from brand-building to demand-capture, which is fine until the demand runs out and there is no brand equity to regenerate it. Survivorship bias leads teams to copy tactics that worked in a specific context for a specific competitor and wonder why they do not work for them. The bandwagon fallacy burns budget on channels that are saturated before the brand has even established itself on channels that might actually work.
The Forrester analysis of go-to-market challenges in complex categories is a useful illustration of what happens when strategic decisions are made on the basis of flawed reasoning rather than evidence. The symptoms look like execution problems. The cause is usually upstream, in the quality of the thinking that shaped the strategy.
I have managed hundreds of millions in ad spend across more than thirty industries. The campaigns that consistently underperformed were rarely the ones with weak creative or insufficient budget. They were the ones built on a flawed premise that nobody had challenged early enough. A logical fallacy at the brief stage does not get corrected by better execution at the campaign stage. It gets amplified.
The most commercially valuable skill in a marketing team is not creativity, not data fluency, not channel expertise. It is the willingness to say, in a room where everyone else is nodding, “I am not sure that reasoning holds up.” That is not cynicism. That is the job.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
