Kool-Aid Advertising Is Costing You More Than You Think
Kool-Aid advertising is what happens when a marketing team starts believing its own story so completely that it stops questioning whether the strategy is actually working. It is the organisational equivalent of drinking the pitcher dry: everyone is enthusiastic, the messaging feels coherent, and the results look fine on the dashboard you built to confirm what you already believed.
The problem is not enthusiasm. Enthusiasm is useful. The problem is when shared belief becomes a substitute for honest scrutiny, and when the internal consensus around a campaign or a channel or a brand narrative quietly displaces the external evidence that should be driving decisions.
This happens more often than most marketing leaders would admit, and it tends to accelerate the longer a team stays together, the more successful it has been in the past, and the more pressure there is to show growth quickly.
Key Takeaways
- Kool-Aid advertising occurs when internal conviction replaces external evidence as the basis for marketing decisions.
- The most dangerous version is not obvious groupthink, but the quiet drift where measurement systems are gradually shaped to confirm what the team already believes.
- Performance channels are particularly vulnerable because their dashboards create an illusion of objectivity that can mask demand capture masquerading as demand creation.
- The antidote is not scepticism for its own sake, but building deliberate friction into how marketing decisions get made and evaluated.
- Teams that have been successful before are more susceptible, not less, because past results become the lens through which new evidence gets filtered.
In This Article
- What Does Kool-Aid Advertising Actually Look Like?
- Why Performance Marketing Is Especially Vulnerable
- The Confidence Problem: When Past Success Becomes a Liability
- How Groupthink Gets Dressed Up as Strategy
- The Measurement Trap Inside the Kool-Aid Problem
- What Kool-Aid Advertising Costs You in Practice
- How to Build a Team That Challenges Its Own Assumptions
- The Harder Question Behind All of This
What Does Kool-Aid Advertising Actually Look Like?
It rarely announces itself. Nobody in a planning meeting says “I think we should stop questioning our assumptions and just run the same strategy harder.” What actually happens is subtler and more structural than that.
A campaign performs well. The team builds a narrative around why it worked. That narrative gets repeated in internal presentations, in client QBRs, in award entries. Over time, the narrative becomes the lens through which new briefs get written and new results get interpreted. Evidence that fits the story gets amplified. Evidence that complicates it gets explained away or quietly deprioritised.
I have watched this happen in agencies and in client-side marketing departments. One of the clearest patterns I have seen is what I would call the dashboard problem: teams build reporting structures that measure what they expect to see, and then use those reports as proof that the strategy is working. If your measurement framework was designed by the same people who designed the strategy, you are not measuring performance, you are measuring alignment with a prior belief.
When I was judging the Effie Awards, I saw the inverse of this problem play out in the entries. The strongest submissions were the ones where the team had clearly stress-tested their own logic, where the results section acknowledged what had not worked and explained why the overall campaign still drove business outcomes. The weakest entries were the ones where every metric told a clean, consistent story with no friction anywhere. Real marketing almost never looks like that.
If you are working through broader questions about how go-to-market strategy gets built and where it tends to break down, the Go-To-Market and Growth Strategy hub covers the full landscape, from audience strategy to channel planning to measurement.
Why Performance Marketing Is Especially Vulnerable
Performance marketing has a particular Kool-Aid problem, and it is one I spent years contributing to before I understood what was actually happening.
Earlier in my career, I placed enormous weight on lower-funnel performance metrics. Conversion rates, cost per acquisition, return on ad spend. These numbers felt like ground truth because they were precise and they were real. What I gradually came to understand is that precision is not the same as accuracy. You can measure something very precisely and still be measuring the wrong thing.
A significant portion of what performance marketing gets credited for would have happened anyway. Someone who has already decided to buy your product and searches for your brand name is not being converted by your paid search ad. They were already converted. You are paying to be visible at the moment of a decision that was made upstream, often by brand activity that never shows up cleanly in the performance dashboard.
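The gap between credited and incremental performance is easiest to see with numbers. The figures below are invented purely for illustration (the article cites no data): they compare the CPA a platform reports against the CPA implied by a holdout test that estimates how many conversions would have happened anyway.

```python
# Hypothetical numbers: captured demand vs created demand.
# A geo-holdout test estimates how many of the credited
# conversions would have happened without the ads running.

ad_spend = 50_000             # brand paid search spend
reported_conversions = 1_000  # conversions the platform credits to the ads
baseline_conversions = 900    # holdout estimate: would have converted anyway

reported_cpa = ad_spend / reported_conversions
incremental = reported_conversions - baseline_conversions
incremental_cpa = ad_spend / incremental

print(f"Reported CPA:    {reported_cpa:.0f}")    # 50
print(f"Incremental CPA: {incremental_cpa:.0f}") # 500
```

On these assumed figures the channel looks ten times more efficient than it actually is, which is exactly the illusion of objectivity described above.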
The Kool-Aid version of this is a marketing team that sees strong lower-funnel numbers, concludes that performance spend is driving growth, and gradually shifts budget away from brand-building activity. The numbers continue to look good for a while, because you are still capturing the existing demand that brand activity previously created. Then, slowly, the pipeline starts to thin. New audiences are not being reached. The pool of people who already know and trust you is not being replenished. By the time the numbers reflect this, the damage has been accumulating for months.
This is not a theoretical risk. Research into GTM pipeline health consistently points to untapped potential sitting outside the audiences that performance teams are already reaching. The demand you are capturing is not the same as the demand you could be creating.
The Confidence Problem: When Past Success Becomes a Liability
There is a version of Kool-Aid advertising that is particularly hard to diagnose because it comes wrapped in genuine, earned credibility. Teams that have been successful before are often more susceptible to this pattern, not less, because past results become the filter through which new evidence gets processed.
When I was building out the team at iProspect, growing from around 20 people to over 100 and moving the business from loss-making to one of the top-performing agencies in the market, there was a period where the things that had driven early growth started to become orthodoxy. The approaches that had worked became the default, and the default became the identity. Questioning those approaches started to feel, internally, like questioning the success itself.
That is a dangerous place for any marketing organisation to be. The market does not care about your track record. It responds to what you are doing now, for audiences that may be different from the ones you built your reputation with, in a competitive environment that has almost certainly shifted.
The antidote is not cynicism about what has worked before. It is building deliberate friction into how decisions get made. Requiring the team to articulate not just why a strategy should work, but what would have to be true for it to fail. Treating strong past performance as a hypothesis to be tested, not a conclusion to be defended.
BCG’s work on scaling agile practices makes a related point about organisational decision-making: the structures that help teams move fast can also insulate them from the signals that would prompt them to change direction. Speed and rigour are not natural allies, and most marketing teams are better at the former than the latter.
How Groupthink Gets Dressed Up as Strategy
One of the more uncomfortable things about Kool-Aid advertising is that it often looks like good strategic alignment from the outside. The team is coherent. The messaging is consistent. Everyone is pulling in the same direction. These are things that marketing leadership is supposed to produce, and they are genuinely valuable. The problem is when alignment becomes a proxy for correctness.
I have sat in enough agency brainstorms and client strategy sessions to recognise the moment when a room stops generating ideas and starts performing enthusiasm for the idea that already has the most momentum. It is not always obvious. It does not always look like suppression. It can look like energy, like a team that is excited and aligned and ready to execute. The tell is usually what does not get said: the uncomfortable question that nobody asks, the data point that gets glossed over, the competitor move that does not get factored in.
Early in my career, I was handed a whiteboard pen in the middle of a Guinness brainstorm when the founder had to leave for a client meeting. My internal reaction was something close to panic, because the room was full of people who had been working on the account for years and had very clear views about what good looked like. The temptation was to facilitate the existing consensus rather than push on it. What I learned from that experience is that the most valuable thing you can do in a room full of smart, experienced people who all agree with each other is to ask the question they are all quietly avoiding.
That is easier to say than to do, particularly when the consensus is backed by genuine expertise and genuine results. But the alternative is a strategy that has been optimised for internal coherence rather than external effectiveness.
The Measurement Trap Inside the Kool-Aid Problem
Measurement is where Kool-Aid advertising does its most structural damage, because measurement shapes what gets done next. If your reporting framework is designed to confirm your strategy rather than interrogate it, you are not measuring performance, you are manufacturing confidence.
This is not just a problem of vanity metrics, though vanity metrics are part of it. It is a deeper problem of attribution and framing. When you decide which metrics to track, which time windows to use, which audiences to include in your analysis, and which comparisons to make, you are making choices that will systematically favour certain conclusions over others. Most of those choices get made once, at the beginning of a campaign or a planning cycle, and then they become the permanent infrastructure through which all subsequent results get interpreted.
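To make the framing point concrete, here is a toy sketch with invented numbers (not from the article) showing how the lookback window chosen once at setup determines how many conversions a channel is credited with ever after:

```python
# Hypothetical conversion log: days elapsed between a user's last
# ad click and their purchase. Which of these the channel gets
# credit for depends entirely on the attribution window chosen
# at the start of the campaign.

days_from_click_to_purchase = [0, 1, 2, 5, 9, 14, 21, 28, 45, 60]

def credited(conversions, window_days):
    """Count conversions that fall inside the lookback window."""
    return sum(1 for d in conversions if d <= window_days)

for window in (7, 30, 90):
    n = credited(days_from_click_to_purchase, window)
    print(f"{window:>2}-day window: {n} credited conversions")
```

Same activity, same buyers, three different performance stories, and the choice of which story gets told was made before a single result came in.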
I have managed hundreds of millions in ad spend across more than 30 industries, and one of the most consistent patterns I have seen is the gap between what a client’s reporting shows and what is actually happening in the market. Not because anyone is being dishonest, but because the reporting was built to answer the questions the team already had, rather than the questions the business actually needed answered.
Tools like Hotjar and similar behavioural analytics platforms can add a useful layer of qualitative signal that sits outside the standard performance dashboard. But even these tools only show you what you point them at. The measurement problem is not primarily a technology problem. It is a framing problem, and framing is a human decision.
Forrester’s perspective on organisational agility and decision-making touches on something relevant here: the teams that scale most effectively are the ones that build feedback loops which surface uncomfortable information quickly, rather than filtering it through layers of existing belief. That is as true for marketing measurement as it is for product development.
What Kool-Aid Advertising Costs You in Practice
The costs are real and they compound over time, but they are not always immediately visible on the P&L, which is part of what makes this pattern so persistent.
The first cost is opportunity cost. When a team is fully committed to a particular strategy or channel mix, it tends not to experiment seriously with alternatives. The experiments that do happen are usually underfunded and under-supported, which means they fail to produce results, which confirms the original belief that the existing approach is the right one. It is a self-sealing logic.
The second cost is competitive exposure. Markets change. Audiences shift. Competitors find new ways to reach people you thought were yours. A team that is drinking its own Kool-Aid is usually the last to notice these shifts, because the signals get filtered through a framework that was built to confirm the status quo.
The third cost is talent. Good marketers, the ones with genuine critical instincts, tend to leave organisations where those instincts are not welcomed. What remains is a team that is very good at executing the existing playbook and increasingly ill-equipped to question it.
There is a useful parallel in how growth strategy practitioners think about the difference between optimisation and exploration. Optimisation is about getting better at what you are already doing. Exploration is about finding out whether what you are doing is the right thing at all. Most marketing teams are structurally incentivised toward optimisation and structurally resistant to exploration. Kool-Aid advertising is what happens when that imbalance goes unchecked for long enough.
How to Build a Team That Challenges Its Own Assumptions
This is the practical question, and it does not have a clean answer, because the problem is partly structural and partly cultural, and the two are deeply intertwined.
On the structural side, the most effective thing I have seen is separating the people who design strategy from the people who evaluate it. Not permanently and not completely, but with enough deliberate distance that the evaluation is not just a post-hoc rationalisation of the original brief. Pre-mortems, where a team works backwards from an assumed failure to identify what could have caused it, are more useful than post-mortems for exactly this reason. They introduce friction before commitment, when it can still change the outcome.
On the cultural side, it starts with what leadership rewards. If the people who raise uncomfortable questions get marginalised and the people who perform enthusiasm get promoted, you will get a team full of enthusiasts. That is not a hiring problem. It is a leadership problem.
Vidyard’s analysis of why go-to-market execution feels harder than it used to points to a related dynamic: the increasing complexity of buying journeys means that teams relying on simplified mental models of how customers behave are going to be wrong more often, and wrong in ways that their existing measurement frameworks will not catch. The answer is not more data. It is better questions.
Creator-led and community-based approaches to go-to-market, like those explored in Later’s work on creator-led campaigns, are partly interesting because they introduce external voices and external audiences into the marketing process. That external signal is valuable not just for reach, but as a check on the internal echo chamber that forms around any sustained campaign.
The pricing and go-to-market research from BCG on long-tail pricing and market strategy makes a point that applies more broadly: the assumptions baked into a go-to-market strategy at the design stage tend to persist long after the market conditions that justified them have changed. Building in regular, structured reviews of those assumptions is not optional. It is the work.
If you are thinking seriously about how to build marketing strategy that holds up under scrutiny rather than just internal consensus, the broader collection of frameworks and perspectives at The Marketing Juice’s growth strategy hub is worth working through. The common thread across those articles is the same one that matters here: commercial rigour over comfortable certainty.
The Harder Question Behind All of This
Kool-Aid advertising is, in the end, a question about what marketing leadership is actually for. If it is for building confidence and maintaining momentum, then shared belief is a feature. If it is for driving business outcomes, then shared belief is only valuable when it is grounded in honest evidence.
Most marketing leaders would say the second thing. Most marketing organisations behave more like the first. The gap between those two positions is where Kool-Aid advertising lives.
The way out is not complicated, but it is uncomfortable. It requires being willing to question strategies that are working by conventional measures. It requires building measurement systems that are designed to surface problems, not confirm success. It requires creating space for the people in the room who are not drinking the pitcher to say so without professional consequence.
None of that is easy in an industry that rewards confidence and punishes doubt. But the alternative is a marketing function that is very good at convincing itself and progressively less good at convincing the people who actually matter: the customers who have no investment in your internal narrative and no reason to care how good your dashboard looks.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
