Advertising Awards Don’t Prove Your Marketing Worked
Advertising awards are not a measure of marketing effectiveness. They are a measure of how well an agency can write an entry form. The two things are related far less often than the industry pretends, and after 20 years of running agencies, managing budgets, and sitting on judging panels, I’ve stopped pretending otherwise.
That’s not cynicism. It’s a pattern I’ve watched repeat itself across every major award circuit, including the ones that claim to prioritise effectiveness above all else. The problem isn’t the awards themselves. It’s what the industry has decided they mean.
Key Takeaways
- Award entries routinely confuse correlation with causation, and judges don’t always catch it.
- The campaigns that win awards are often not the campaigns that drove the most commercial growth.
- Agencies optimise for award-worthiness, which is a different brief from the client's actual one.
- Effectiveness frameworks like the Effies have real value, but only when the evidence is interrogated honestly.
- If your marketing strategy is shaped by what wins awards, you are optimising for the wrong audience.
In This Article
- What Advertising Awards Actually Measure
- The Causation Problem Nobody Wants to Talk About
- Why Agencies Are Structurally Incentivised to Chase Awards
- The Campaigns That Win Awards vs. the Campaigns That Work
- What Happens When Clients Start Optimising for Awards
- The Effie Problem: When Even the Effectiveness Awards Get It Wrong
- What Effective Marketing Actually Looks Like
- How to Use Awards Without Being Used by Them
- The Honest Version of This Conversation
What Advertising Awards Actually Measure
When I judged the Effie Awards, I went in with genuine respect for the process. The Effies are structured around effectiveness, not just creativity. Entrants are supposed to demonstrate that their work drove real business results. On paper, it’s the most commercially grounded award in the industry.
What I found in practice was more complicated. Some entries were genuinely impressive. Clear objectives, well-constructed evidence, honest acknowledgement of what could and couldn’t be attributed to the campaign. Those were the minority.
A larger portion fell into a pattern I came to think of as “narrative engineering.” The entry would establish a business problem, present the campaign, then point to a sales uplift or market share gain that happened in the same period. The implication was obvious: the campaign caused the result. But the entries rarely ruled out the alternative explanations. A competitor had a product recall. The category was growing anyway. The client also ran a significant price promotion during the same window. None of that made it into the entry.
Some entries were more deliberate than that. Data presented selectively. Timeframes chosen to flatter the numbers. Metrics swapped mid-argument when the primary KPI didn’t move in the right direction. I don’t think most of it was outright fraud. I think it was motivated reasoning, the same cognitive bias that affects all of us when we want something to be true.
But the effect is the same. Awards get handed out for work that hasn’t proved what it claims to have proved.
The Causation Problem Nobody Wants to Talk About
Marketing has a structural problem with causation. Most of what we measure is correlation, and the industry has quietly agreed to treat correlation as proof. Awards ceremonies are where that agreement gets formalised and celebrated.
A brand runs a campaign. Sales go up. The campaign wins an award for driving sales growth. But did the campaign cause the sales growth, or did it coincide with it? Answering that question properly requires a counterfactual: what would have happened without the campaign? Almost no award entry attempts to construct one rigorously.
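What constructing a counterfactual rigorously looks like, in its simplest form, is a holdout test: withhold the campaign from a matched control group and compare growth in the two groups over the same window. A minimal sketch of the arithmetic, with entirely hypothetical numbers:

```python
# Hypothetical geo holdout: the campaign runs in test markets only,
# and matched control markets provide the counterfactual baseline.
# All figures below are illustrative, not from any real campaign.

def incremental_lift(test_sales, test_baseline, control_sales, control_baseline):
    """Difference-in-differences: growth in test markets minus
    growth in matched control markets over the same period."""
    test_growth = (test_sales - test_baseline) / test_baseline
    control_growth = (control_sales - control_baseline) / control_baseline
    return test_growth - control_growth

# Test markets grew 12% during the campaign; control markets grew 8%
# with no campaign at all. Only the gap is attributable to the work.
lift = incremental_lift(112, 100, 108, 100)
print(f"Incremental lift: {lift:.1%}")  # 4.0%, not the headline 12%
```

The point of the sketch is the gap between the headline number and the incremental one: a 12% sales rise during a campaign can conceal the fact that the category was growing 8% anyway. Most award entries report the 12%.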
The honest version of most effectiveness entries would read something like: “We ran a campaign, sales went up, and we believe the campaign contributed to that, but we cannot isolate its effect from the other variables in play.” That’s not a winning entry. So nobody writes it that way.
I’ve spent time with BCG’s thinking on commercial transformation and the conditions under which marketing investment actually drives growth. The consistent finding is that the relationship between marketing activity and business outcomes is real but complex, and it rarely maps cleanly onto a single campaign in a single quarter. The industry’s awards calendar doesn’t accommodate that complexity particularly well.
Why Agencies Are Structurally Incentivised to Chase Awards
I’ve run agencies. I understand the commercial logic of award entries. A shortlist mention in Cannes goes on the credentials deck. It goes in the new business pitch. It gets used to justify a rate card increase. Awards are currency in the agency world, and that’s not inherently wrong.
The problem is what happens to the work when awards become the target. Campaigns get shaped, at least partly, around what judges respond to rather than what clients need. Budgets get allocated to “award-able” work. Briefs get reframed. There’s a well-known dynamic in agencies where the creative team knows which projects have award potential and which ones are just good client work, and the energy flows accordingly.
I saw this clearly when I was growing an agency from around 20 people to closer to 100. The pressure to win awards was real, both externally from clients who wanted a “creative agency” and internally from staff who wanted to work on celebrated projects. But the campaigns that actually moved client businesses forward were often the unglamorous ones. The search overhaul that cut cost-per-acquisition by 40%. The email programme that recovered lapsed customers at scale. The attribution model rebuild that showed the client they were over-investing in display and under-investing in mid-funnel content. None of that wins at Cannes.
If you’re thinking about how marketing strategy connects to real commercial growth rather than industry recognition, the broader Go-To-Market and Growth Strategy hub covers the frameworks that actually move businesses forward.
The Campaigns That Win Awards vs. the Campaigns That Work
There’s a version of this argument that goes too far. Some award-winning campaigns are genuinely excellent and genuinely effective. The best Effie winners represent real rigour. Some Cannes Grand Prix campaigns have driven measurable business results. I’m not arguing that awards and effectiveness are mutually exclusive.
I’m arguing that they are independent variables, and the industry treats them as if they’re the same thing.
Think about the campaigns you remember from the last decade. The ones that won the most awards. Now think about the brands behind those campaigns. Are they the fastest-growing businesses in their categories? Sometimes. Often not. Meanwhile, there are companies with no award-winning campaigns to their name that have compounded growth for years through disciplined, unglamorous marketing: consistent positioning, well-targeted media, strong conversion architecture, and relentless measurement.
The growth hacking literature, for all its flaws, gets one thing right: the tactics that drive sustainable business growth are rarely the ones that make for a compelling case study. Semrush’s breakdown of growth hacking examples illustrates how the most effective growth programmes tend to be systematic and iterative rather than campaign-shaped and award-ready.
Awards are designed around campaigns. Growth is usually driven by systems. Those are different things.
What Happens When Clients Start Optimising for Awards
The most damaging version of award culture isn’t agencies chasing trophies. It’s clients who’ve internalised the idea that award-winning work is inherently better work.
I’ve sat in client meetings where the brief included something like “we want work that could win at Cannes.” That’s a real brief. It happens more than you’d think. And it creates an immediate tension, because “work that could win at Cannes” and “work that will grow our business” are not the same brief, and trying to satisfy both at once usually produces work that does neither particularly well.
When clients optimise for awards, they tend to approve riskier creative and more ambitious production values than the business case justifies. They greenlight campaigns that are interesting rather than campaigns that are right. And then, when the results don’t materialise, the agency gets blamed for not delivering commercial outcomes on a brief that was never really about commercial outcomes.
The underlying issue is one that Vidyard’s analysis of why go-to-market feels harder touches on: the gap between what marketing teams are measured on internally and what actually drives business results. When internal success metrics include award recognition, the work drifts away from commercial effectiveness almost by design.
The Effie Problem: When Even the Effectiveness Awards Get It Wrong
I want to be precise here, because I have genuine respect for what the Effies are trying to do. The framework is sound. The intent is right. Requiring entrants to demonstrate business results rather than just creative quality is a meaningful standard.
But the execution has gaps that the judging process doesn’t always close.
The first gap is verification. Award entries are self-reported. Judges read what agencies and clients choose to present. There is no independent audit of the data. When an entry claims that a campaign drove a 23% increase in brand consideration, judges have to take that on faith. Some entries include third-party research. Many don't. The incentive to present the most favourable interpretation of the data is enormous, and there are few structural checks on it.
The second gap is the causation problem I mentioned earlier. Even experienced judges, people who understand marketing research, can be drawn in by a well-constructed narrative. When the story is compelling and the numbers point in the right direction, it takes discipline to keep asking “but how do we know the campaign caused this?” Judging panels are busy. Entries are long. The path of least resistance is to accept the narrative.
The third gap is category selection. Entrants choose which category to enter. A campaign that underperformed on its primary objective might win in a secondary category where the bar is lower. The award still gets claimed as proof of effectiveness.
None of this makes the Effies worthless. It makes them a starting point for a conversation about effectiveness, not the conclusion of one.
What Effective Marketing Actually Looks Like
Early in my career, I was at an agency and found myself holding the whiteboard pen for a Guinness brainstorm when the founder had to step out for a meeting. The brief was for a campaign that would shift volume in a declining draught category. My first instinct was to think about what would be interesting, what would stand out, what would get people talking.
The more useful question, which took me longer to learn than I’d like to admit, was: what does someone need to think, feel, or do differently for Guinness to sell more pints? That question leads you somewhere different. It leads you toward behaviour, toward occasions, toward the specific moments where the brand can intervene in a purchase decision. It’s less glamorous than the question about what’s interesting. It’s more commercially useful.
Effective marketing tends to share certain characteristics. It starts with a clear commercial objective rather than a communications objective. It identifies the specific audience behaviour that needs to change. It chooses channels and formats based on where that audience actually is and what they respond to. It measures the right things, meaning the things that connect to the commercial objective rather than the things that are easy to measure. And it iterates based on what the data actually shows, not what the narrative requires.
That process rarely produces a single campaign moment that judges can evaluate in 20 minutes. It produces a programme of work that compounds over time. CrazyEgg’s overview of growth-focused marketing approaches captures something similar: the most durable growth comes from systematic improvement across the full customer experience, not from individual campaign peaks.
How to Use Awards Without Being Used by Them
Awards aren’t worthless. Used properly, they serve a few legitimate functions.
They can be a useful signal for talent. Creative people want to work in environments where their work is recognised. If winning awards helps attract and retain good people, that’s a real benefit, provided it doesn’t come at the cost of the work’s commercial effectiveness.
They can accelerate new business conversations. A shortlist in a credible category gets you into rooms you might not otherwise access. That’s commercially valuable for an agency, as long as the underlying capability is real and not just well-packaged.
And the better effectiveness awards, when entered honestly, can be a useful internal discipline. Writing a proper Effie entry forces you to articulate your objectives, your strategy, and your evidence in a structured way. Even if you don’t win, the process has value.
What awards shouldn’t do is substitute for actual commercial measurement. They shouldn’t be used as proof that your marketing worked when you haven’t done the work to establish causation. And they shouldn’t shape the brief before the work starts.
The BCG framework for go-to-market strategy makes a point that applies here: commercial transformation requires aligning marketing activity to business objectives at every stage, not just at the measurement phase. If award-worthiness enters the process at the brief stage, the alignment is already broken.
The distinction between marketing that builds genuine commercial momentum and marketing that looks impressive in a case study is one of the threads running through everything I write about on the Go-To-Market and Growth Strategy hub. If you’re trying to build the former, the frameworks there are worth your time.
The Honest Version of This Conversation
I’ve been in this industry long enough to know that the award circuit isn’t going anywhere. The incentives that sustain it are too deeply embedded: agency economics, talent attraction, client vanity, trade press coverage. It’s a system that serves the industry’s self-image, and the industry likes its self-image.
What can change is how individual marketers and marketing leaders relate to it. You can participate in the award circuit without letting it define your success criteria. You can enter work that you’re genuinely proud of without building your strategy around what judges will respond to. You can read award case studies as interesting stories rather than proven playbooks.
And you can hold yourself to a higher standard of evidence than most award entries require. When your campaign works, show why you think it worked, what the alternative explanations are, and why you’re discounting them. That’s harder than writing a compelling narrative. It’s also more useful, for your clients, for your own learning, and for the industry’s long-term credibility.
The marketers I respect most are not the ones with the most trophies. They’re the ones who can tell you precisely what their marketing did to the business, what it didn’t do, and what they’d do differently next time. That kind of clarity doesn’t win awards. It wins clients, and it wins results.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
