Marketing Agency Case Studies: What They Show vs. What They Prove
Marketing agency case studies are the industry’s most polished sales tool and its least reliable source of truth. At their best, they demonstrate genuine strategic thinking and measurable client outcomes. At their worst, they are retrospective narratives built around whatever numbers looked good, stripped of context, and designed to impress rather than inform.
If you are evaluating an agency, commissioning one, or writing one, understanding the difference between a case study that proves something and one that merely shows something is commercially essential.
Key Takeaways
- Most agency case studies prove correlation, not causation. A result that happened during a campaign is not automatically a result caused by that campaign.
- The strongest case studies define the problem before the solution, not the other way around. If the brief is absent, treat the whole document with suspicion.
- Attribution is the central weakness in almost every agency case study. Agencies rarely control all variables, and most never acknowledge that.
- A case study that omits what did not work is a marketing document, not an evidence base. Honest ones include the friction.
- When commissioning or writing case studies, the commercial outcome matters more than the creative or media achievement. Awards do not pay client invoices.
In This Article
- Why Agency Case Studies Exist and What They Are Actually For
- The Attribution Problem No One Wants to Talk About
- What a Credible Case Study Actually Contains
- How Agencies Use Case Studies in the Pitch Process
- The Difference Between Creative Awards and Effectiveness Evidence
- Writing Case Studies That Actually Build Trust
- What Prospective Clients Should Ask When Reading Agency Case Studies
- The Role of Case Studies in Longer-Term Agency Credibility
Why Agency Case Studies Exist and What They Are Actually For
I have been on both sides of this. I have written case studies to win new business, and I have read hundreds of them as a prospective client, a pitch judge, and someone evaluating agency partnerships at scale. The honest answer to what case studies are for is this: they are primarily a sales tool. That is not a criticism. It is just important context for how you read them.
When I was running agencies, we treated case studies as proof points in the new business process. They were designed to reduce perceived risk for a prospective client, to show that we had done something similar before and it had worked. The problem is that “worked” is a word that carries a lot of weight and almost never gets interrogated properly.
A case study that says “we increased organic traffic by 140% in six months” is telling you something. It is not necessarily telling you that the agency caused that increase, that the increase was sustained, that it translated into revenue, or that the same approach would work for your business in your category at your stage of growth. Those are the questions that matter, and they are almost never in the document.
If you want a broader view of how agencies operate, grow, and create value, the Agency Growth and Sales hub covers the commercial mechanics that sit behind the work, including how agencies price, pitch, and build sustainable client relationships.
The Attribution Problem No One Wants to Talk About
Earlier in my career I was deeply focused on lower-funnel performance metrics. I believed, genuinely, that if the numbers went up while we were running campaigns, we deserved credit. It took years of looking at the same patterns across enough clients and categories before I started questioning that assumption seriously.
The uncomfortable truth is that a meaningful proportion of what performance marketing gets credited for was going to happen anyway. Demand existed. People were already in market. The campaign captured that intent rather than creating it. That is not nothing, but it is a very different claim to “we grew your business.”
Case studies almost never grapple with this. They present a before and after. They show the metrics that improved. They draw a straight line from the agency’s work to the outcome, and they leave out every other variable that was moving at the same time: seasonality, competitor activity, product changes, pricing shifts, broader market conditions.
I judged the Effie Awards, which are specifically designed to evaluate marketing effectiveness rather than creative quality. Even in that context, with rigorous submission criteria and experienced judges, attribution remains the hardest problem in the room. Entrants with genuinely strong evidence of causation stand out precisely because they are rare. Most entries show correlation and hope the judges do not push too hard on the mechanics.
When you read an agency case study, ask: what else was happening in this client’s business during this period? If the case study does not address that question, it has not proven anything. It has shown you a number that went up.
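The attribution point above can be made concrete with a toy calculation. The figures below are invented purely for illustration; a simple difference-in-differences comparison against a control (for example, a matched region or segment that never saw the campaign) is one basic way to separate campaign lift from the background trend a naive before/after comparison ignores.

```python
# Hypothetical monthly sales figures, invented for illustration.
# The campaign region saw the campaign; the control region did not.
campaign_region = {"before": 100_000, "after": 140_000}
control_region = {"before": 100_000, "after": 125_000}

# The naive before/after uplift credits the campaign with everything.
naive_uplift = campaign_region["after"] - campaign_region["before"]

# The control region reveals how much growth was happening anyway:
# seasonality, market conditions, demand that already existed.
background_trend = control_region["after"] - control_region["before"]

# Difference-in-differences: subtract the background trend to estimate
# the portion of the uplift the campaign can plausibly claim.
did_estimate = naive_uplift - background_trend

print(f"Naive before/after uplift:         {naive_uplift:,}")
print(f"Background trend (control):        {background_trend:,}")
print(f"Difference-in-differences estimate: {did_estimate:,}")
```

In this made-up example the naive claim is a 40,000-unit uplift, but more than half of it was background growth the campaign did not cause. Real attribution is messier than two subtractions, but a case study that cannot even gesture at a comparison like this has not addressed causation at all.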
What a Credible Case Study Actually Contains
There is a structural difference between case studies built to impress and case studies built to inform. The ones worth taking seriously tend to share a few characteristics that are not particularly glamorous but are commercially meaningful.
The brief comes first. Before any agency talks about what they did or what happened, a credible case study explains what the client needed and why. What was the business problem? What was the market context? What had already been tried? If a case study starts with the creative concept or the media strategy without establishing the problem it was solving, that is a red flag. It suggests the work was built around what the agency wanted to do rather than what the client needed.
The measurement framework is defined in advance. One of the cleaner signals of a genuine effectiveness story is that the success metrics were agreed before the campaign ran, not selected afterwards because they looked good. Any agency can find a number that improved during a campaign period. Far fewer can show you the metric they committed to at the start and demonstrate movement against it.
The result is specific and bounded. “Significant uplift in brand awareness” tells you almost nothing. “23-point increase in unaided brand recall among 25-to-44-year-old women in the southeast, measured via pre- and post-campaign surveys” tells you something. Specificity is not a guarantee of accuracy, but vagueness is almost always a sign that the results were either weak or not measured properly.
There is acknowledgment of what did not work. This is the rarest element in any agency case study and the most valuable signal of intellectual honesty. Every campaign has something that underperformed, a channel that did not deliver, a creative execution that missed, a targeting assumption that turned out to be wrong. Agencies that include this information in their case studies are telling you something important about how they think and how they will behave when things go sideways on your account.
How Agencies Use Case Studies in the Pitch Process
I have sat in enough new business pitches from both sides of the table to know how case studies function in that environment. They are not evidence in the scientific sense. They are social proof. They are designed to reduce the anxiety of the person making the buying decision, to give them something to point to if the hire goes wrong: “Look, they had done this before.”
That is a legitimate function. Reducing perceived risk is a real commercial service. But it means you should read case studies the way you read testimonials, with appropriate skepticism and a clear understanding of what they can and cannot tell you.
The most useful thing you can do with a case study in a pitch context is treat it as a starting point for a conversation rather than a conclusion. Ask the agency to walk you through the brief they received. Ask what the client was measuring before the campaign started. Ask what they would do differently if they ran it again. The answers to those questions will tell you far more about the agency’s thinking than the polished document they handed you.
Personalization in the pitch process is worth considering here too. Unbounce has written about how agencies can use personalization to win new business, and the underlying principle applies to case study selection as much as anything else. Showing a prospective client a case study from a completely different category, scale, and business model because it has impressive numbers is less persuasive than showing something that maps closely to their actual situation, even if the numbers are less dramatic.
The Difference Between Creative Awards and Effectiveness Evidence
This distinction matters more than the industry tends to acknowledge. Creative awards celebrate the quality of the work. Effectiveness awards celebrate the impact of the work on the business. These are related but not the same thing, and conflating them is one of the more persistent habits in agency marketing.
I have seen agencies lead their credentials with Cannes Lions and D&AD wins for work that, when you looked at the actual client outcomes, had not moved the commercial needle in any meaningful way. The work was genuinely excellent. It was also largely irrelevant to whether that agency could solve a specific business problem for a new client.
Conversely, some of the most commercially effective campaigns I have seen in my career were not particularly glamorous. They were rigorous, well-targeted, and built around a clear understanding of the customer decision process. They would not have won awards. They would have renewed contracts.
When evaluating an agency case study that leads with award recognition, the question to ask is: what was the commercial outcome for the client? If that information is absent or buried, the case study is telling you what the agency values, not necessarily what it can deliver.
Writing Case Studies That Actually Build Trust
If you are an agency owner or a marketing leader responsible for producing case studies, the temptation is to optimize for impressiveness: bigger numbers, cleaner narrative, more dramatic transformation. That approach works in the short term. It falls apart in the sales process when a sophisticated buyer starts asking hard questions.
The more durable approach is to write case studies that are honest about complexity. That does not mean leading with your failures. It means giving the reader enough context to understand what actually happened and why, including the parts that were hard.
For content-led agencies, Buffer’s writing on running a content agency touches on the importance of building genuine credibility through transparency rather than polish. The same principle applies to how you document client work. Readers, especially experienced buyers, can sense when a document has been engineered rather than written honestly.
A few structural choices make case studies more credible without making them less compelling. State the original brief explicitly, including any constraints. Name the metrics that were agreed in advance. Show the timeline honestly, including any pivots or adjustments made during the campaign. Quantify the outcome against the original objective rather than against a different metric that happened to perform better. And if the client is willing, include a direct quote that addresses the business outcome rather than the quality of the relationship.
Agencies that operate with this level of transparency in their case studies tend to attract clients who are a better fit, because the clients who value honesty are the ones who will have a productive working relationship. The ones who want to be impressed by big numbers and clean narratives are often the ones who will blame the agency when reality turns out to be more complicated than the pitch suggested.
What Prospective Clients Should Ask When Reading Agency Case Studies
There is a set of questions that cuts through most of the noise in agency case studies, and they are not particularly complicated. They just require the discipline to ask them rather than accepting the document at face value.
What was the client’s situation before the agency got involved? Not the sanitized version in the case study, but the real commercial context. Was the brand growing or declining? Was the category expanding or contracting? Was there a new product launch or a pricing change happening at the same time? These factors matter enormously for interpreting any result.
How was success defined before the campaign started? If the agency cannot answer this clearly, the metrics in the case study were probably selected retrospectively. That is not fraud, but it is not evidence either.
What happened after the campaign ended? Short-term spikes in traffic, leads, or sales are interesting but not conclusive. What matters is whether the underlying business metrics improved on a sustained basis. Agencies rarely follow up their case studies with twelve-month post-campaign data, but it is worth asking for.
Can you speak to the client directly? A case study is a curated document. A conversation with the actual client is a different kind of evidence. Most agencies will facilitate this for serious prospects, and the ones who are reluctant to do so are often protecting a narrative that would not survive direct scrutiny.
For agencies building out their digital presence and thought leadership alongside case studies, tools like Later’s platform for agencies and freelancers can support the broader content distribution process. The case study itself is only as useful as the audience that sees it.
The Role of Case Studies in Longer-Term Agency Credibility
There is a longer game here that agencies often miss. Case studies are not just new business tools. They are a public record of how an agency thinks, what it values, and how it behaves when things get difficult. Over time, the body of case studies an agency publishes tells a story about its strategic orientation that goes beyond any individual piece of work.
Agencies that consistently publish case studies focused on business outcomes rather than creative achievement signal something to the market. They attract clients who care about commercial results. They build a reputation for effectiveness rather than just quality. That reputation compounds over time in a way that award wins and impressive numbers do not.
I spent years growing an agency from a position of commercial weakness, and one of the things that changed the trajectory was being more honest in how we talked about client work, including the cases where we had to pivot, the campaigns that underdelivered against the original forecast, and the strategic calls that turned out to be wrong. That honesty was uncomfortable at first. Over time, it became one of the clearest differentiators we had, because almost no other agency was doing it.
There is also a practical content dimension to this. Agencies that invest in genuinely useful, transparent content around their work tend to build organic search presence and thought leadership that supports new business in ways that paid promotion cannot replicate. Copyblogger’s perspective on content-led marketing is relevant here: the content that builds long-term credibility is the content that is genuinely useful, not the content that is designed to impress.
For a wider view of how agencies can build sustainable commercial practices, the Agency Growth and Sales section of The Marketing Juice covers the operational and strategic levers that separate agencies that scale from those that stall, including how positioning, pricing, and client management interact with the new business process.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
