Decision-Based Evidence Making: Why Marketers Find the Data They’re Looking For
Decision-based evidence making is the practice of reaching a conclusion first, then selecting data to support it. It is the reverse of evidence-based decision making, and it is far more common in marketing than anyone in the industry likes to admit. Most teams do not set out to cherry-pick. They do it instinctively, under pressure, with a narrative already forming before the analysis begins.
The result is not just bad strategy. It is strategy that feels rigorous because it has charts behind it, which makes it harder to challenge and more expensive when it fails.
Key Takeaways
- Decision-based evidence making starts with a conclusion and works backwards through data, which is the opposite of how good analysis should function.
- The problem is not usually dishonesty. It is the way pressure, sunk costs, and cognitive bias cause teams to unconsciously filter what they look at.
- Dashboards and reporting tools make this worse, not better, because they surface data selectively and reward confirmation.
- The structural fix is separating the people who make decisions from the people who build the case for them, at least temporarily.
- “What would change our mind?”, asked before a decision is made, is one of the most commercially useful questions a senior marketer can put to a room.
In This Article
- Why This Happens More Than It Should
- What Decision-Based Evidence Making Looks Like in Practice
- How Dashboards and Reporting Tools Make It Worse
- The Sunk Cost Problem
- Why Senior Marketers Are Not Immune
- The Credibility Cost Nobody Talks About
- How to Build Structural Checks Against It
- The Relationship Between Motivated Reasoning and Buyer Behaviour
- What Good Looks Like
Why This Happens More Than It Should
I spent years running agency teams where new business had already made promises before the strategy team was in the room. A campaign concept would be sold to a client, a media approach committed to, a timeline set, and then the brief would land on the planning team’s desk with an implicit instruction: make this work. The analysis that followed was rarely neutral. It was designed, consciously or not, to validate a decision that had already been made.
That is not a character flaw. It is a structural one. When commercial incentives, client relationships, and internal politics are all pointing in one direction, the data tends to follow. The question is whether you build systems that correct for that, or whether you let it run.
Most organisations let it run. And the reason is simple: decision-based evidence making is comfortable. It produces confident-sounding recommendations quickly. It avoids the awkward conversation where someone says the data does not actually support what the room has already agreed to do.
If you want to understand how buyers think and how rational decision-making breaks down under pressure, the broader context is worth reading. The Persuasion and Buyer Psychology hub covers the cognitive and commercial mechanics behind how people process information and make choices, including the biases that make this kind of reverse engineering so easy to fall into.
What Decision-Based Evidence Making Looks Like in Practice
It rarely looks like fraud. It looks like enthusiasm. It looks like a senior leader who is convinced about a channel, a creative direction, or a market opportunity, and who frames every subsequent conversation around confirming that conviction. It looks like a planning team that has three weeks to turn around a strategy document and gravitates toward the data points that tell a clean story. It looks like a post-campaign review that emphasises reach and impressions because the conversion numbers are not worth presenting.
I have been in rooms where a campaign has clearly underperformed, and the debrief has been structured entirely around the metrics that moved in the right direction. Not because anyone was trying to deceive the client, but because no one wanted to be the person who said it plainly. The narrative was set before the slides were built.
There are a few patterns worth naming specifically.
Metric selection bias. Choosing which KPIs to report based on which ones support the narrative. If cost per acquisition has worsened but click-through rate has improved, you lead with click-through rate. This is not lying. But it is not honest analysis either.
Time window manipulation. Pulling data from the period where performance looks best. A campaign that performed well in week two but declined through weeks three to six becomes “a strong opening with optimisation opportunities” rather than “a campaign that did not hold.”
Benchmark shopping. Comparing results against the benchmark that makes the outcome look most favourable. Industry average, previous campaign average, or a competitor’s publicly reported figure, whichever one makes the current number look acceptable.
Correlation as causation. “Sales went up in the same quarter we ran the campaign” is not evidence the campaign drove sales. But it gets used as evidence constantly, especially when there is pressure to demonstrate ROI.
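To make the middle two patterns concrete, here is a minimal Python sketch. The weekly figures and benchmarks are invented purely for illustration and do not describe any real campaign; the point is simply that the same dataset can be made to beat or miss expectations depending on which window and which benchmark the reporter chooses.

```python
# Illustrative sketch only: hypothetical weekly numbers and invented benchmarks.
# It shows how time window manipulation and benchmark shopping change the story
# told by one and the same campaign dataset.

weekly = [
    # (week, spend, conversions)
    (1, 10_000, 110),
    (2, 10_000, 140),   # strong week two
    (3, 10_000, 95),
    (4, 10_000, 80),
    (5, 10_000, 70),
    (6, 10_000, 60),    # clear decline through weeks three to six
]

def cpa(rows):
    """Cost per acquisition across a set of weekly rows."""
    spend = sum(r[1] for r in rows)
    conversions = sum(r[2] for r in rows)
    return spend / conversions

full_window = cpa(weekly)      # the window agreed upfront: all six weeks
best_window = cpa(weekly[:2])  # time window manipulation: weeks one and two only

# Benchmark shopping: the same number looks different against different yardsticks.
benchmarks = {
    "previous campaign": 95.0,
    "industry average": 120.0,
    "competitor's reported figure": 150.0,
}

print(f"CPA, full six weeks: {full_window:.0f}")
print(f"CPA, weeks 1-2 only: {best_window:.0f}")
for name, bench in benchmarks.items():
    verdict = "beats" if full_window < bench else "misses"
    print(f"Against {name} ({bench:.0f}): campaign {verdict} the benchmark")
```

Run as written, the full window "misses" the previous campaign but "beats" the industry average and the competitor figure, while the two-week window looks strong against all three. Same data, three defensible-sounding stories.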
How Dashboards and Reporting Tools Make It Worse
There is a widely held assumption in marketing that more data leads to better decisions. I am not sure that is true. What more data often leads to is more sophisticated rationalisation.
Modern reporting tools give you an enormous amount of flexibility in how you slice and present data. That flexibility is genuinely useful when you are trying to understand what happened. It is dangerous when you already think you know what happened and you are looking for the view that confirms it. The psychology of decision-making is well-documented: people are not neutral processors of information. They weight evidence that confirms existing beliefs more heavily than evidence that challenges them, and they do this without realising it.
When I was building out the analytics capability at iProspect, one of the things I pushed for was separating the people who ran campaigns from the people who reported on them, at least for significant reviews. Not because I thought account teams were dishonest. But because I knew that someone who has spent six months building and running a campaign is not the most neutral person to assess whether it worked. They have a stake in the answer. The data they surface will reflect that stake, even unconsciously.
That structural separation is not always possible in smaller teams. But the principle matters: the person with the most to lose from a negative finding should not be the only person interpreting the data.
The Sunk Cost Problem
One of the most reliable triggers for decision-based evidence making is sunk cost. Once a significant budget has been committed, a vendor relationship established, or a campaign launched, the psychological pressure to justify that commitment becomes enormous. The analysis that follows is rarely impartial.
I saw this play out on a project that had been sold into a client at roughly half the budget it actually needed. By the time I was brought in to assess the situation, the agency had already spent months trying to make it work at the wrong price point, and the client had already invested significant time and internal political capital in championing the project. Neither side wanted to acknowledge the obvious. The evidence being presented to leadership on both sides had been filtered through that shared reluctance.
The honest conversation, which eventually had to happen, was that the project was loss-making, the original scope was wrong, and continuing on the current terms would make it worse for everyone. That conversation was uncomfortable. But it was also the only one that was grounded in what the data actually showed, rather than what people needed it to show.
Sunk cost thinking is a well-documented feature of human decision-making. The amount already spent becomes a reason to keep spending, even when the rational case for stopping is clear. In marketing, this shows up in channel decisions, agency relationships, technology investments, and creative directions that have not worked but that no one is willing to call.
Why Senior Marketers Are Not Immune
There is a temptation to frame this as a junior team problem. The reality is that seniority makes it worse in some respects, not better. Senior leaders have more authority to set the narrative, more relationships to protect, and more professional identity invested in their past decisions. When a CMO has publicly backed a strategy, the analysis that follows tends to support that strategy. Not because the CMO is corrupt, but because the people around them are reading the room.
I have judged the Effie Awards, which means I have seen how the industry presents its best work when it is trying to make a case for effectiveness. Even in that context, where the explicit goal is rigorous evaluation, you see submissions that are built backwards from a result. The creative work is strong, the outcome metrics are positive, and the causal logic connecting them is thinner than the submission implies. Judges who know what to look for can spot it. Most clients cannot.
The issue is not that marketers are uniquely prone to self-deception. It is that the industry has built very few structural checks against it. There is no external audit of campaign effectiveness claims. There is no standard methodology for attributing outcomes to marketing activity. There is enormous commercial pressure to tell a good story, and very little accountability when the story turns out to be wrong.
Understanding how persuasion works at a psychological level is useful here, because the same mechanisms that make buyers susceptible to motivated reasoning also make marketers susceptible to it. Confirmation bias, authority bias, and the tendency to weight recent evidence more heavily than historical patterns are not client-side problems. They operate inside marketing teams too.
The Credibility Cost Nobody Talks About
There is a short-term logic to decision-based evidence making. It keeps clients happy. It protects internal relationships. It avoids difficult conversations. But there is a credibility cost that accumulates quietly over time, and it tends to surface at the worst possible moment.
When clients eventually notice that the data always seems to tell the same story, they stop trusting the analysis. When internal stakeholders realise that the evidence for a decision was assembled after the decision was made, they stop engaging seriously with the planning process. The room goes through the motions, the slides get presented, and everyone knows the conclusion was never really in question.
That erosion of trust is hard to recover from. Trust signals matter in every commercial relationship, and in a marketing or agency context, analytical credibility is one of the most important ones. Once stakeholders believe that the data is being shaped to fit the narrative, they start applying their own filters to everything you present. You lose the ability to make a genuinely evidence-based case, even when you have one.
I watched this happen with a client relationship that had been managed through optimistic reporting for about eighteen months. When performance genuinely deteriorated and the team needed the client to trust the diagnosis, there was no credibility left to draw on. Every piece of analysis was viewed with suspicion, because the client had learned, correctly, that the analysis had been shaped by what the agency wanted to be true.
How to Build Structural Checks Against It
The solution is not to tell people to be more objective. Objectivity is not a personality trait you can instruct into existence. It is a structural outcome that requires deliberate design.
Define success criteria before the campaign launches, not after. If you agree in advance what good looks like, you remove the ability to redefine success once the results are in. This sounds obvious. It is not standard practice. Most teams define KPIs loosely enough that the goalposts can be moved without anyone technically lying.
Ask “what would change our mind?” at the start of every significant decision. This borrows from the pre-mortem: it forces the room to identify, in advance, the conditions under which the current hypothesis would be wrong. If no one can answer that question, the decision is not evidence-based. It is preference-based with data attached.
Separate analysis from advocacy. The person making the case for a decision should not be the same person building the analytical case for it, at least on decisions of significant commercial consequence. This is standard practice in investment and legal contexts. It is almost unknown in marketing.
Report the metrics that were agreed upfront, not the ones that moved in the right direction. This requires discipline, especially when the agreed metrics are not flattering. But it is the only way to build a reporting relationship that has any analytical integrity.
Create space for the uncomfortable interpretation. In most team environments, the person who says “I think this data is telling a different story” is taking a social risk. If that risk is not explicitly mitigated, people will not take it. The most analytically rigorous teams I have worked with are the ones where challenging the prevailing narrative is treated as a contribution, not a problem.
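For the first and fourth checks, it can help to see what “agreed upfront” might look like if it were written down literally. The following is a minimal sketch with hypothetical KPI names, targets, and results: the metric set is fixed before launch and reported in full afterwards, whether or not the numbers flatter the campaign.

```python
# Illustrative sketch only: hypothetical KPIs and thresholds agreed before
# launch, then every one of them reported after the campaign, hit or miss.

# Success criteria defined upfront (invented names and targets).
agreed_kpis = {
    "cost_per_acquisition": {"target": 95.0, "better": "lower"},
    "conversion_rate":      {"target": 0.025, "better": "higher"},
    "click_through_rate":   {"target": 0.012, "better": "higher"},
}

# Post-campaign actuals (invented for illustration).
actuals = {
    "cost_per_acquisition": 108.0,
    "conversion_rate": 0.021,
    "click_through_rate": 0.016,
}

def evaluate(kpis, results):
    """Report every agreed metric, in the order agreed upfront, hit or miss."""
    report = []
    for name, spec in kpis.items():
        value = results[name]
        hit = value <= spec["target"] if spec["better"] == "lower" else value >= spec["target"]
        report.append((name, value, spec["target"], "hit" if hit else "miss"))
    return report

for name, value, target, verdict in evaluate(agreed_kpis, actuals):
    print(f"{name}: {value} vs target {target} -> {verdict}")
```

In this invented example only click-through rate hits its target. The discipline is in printing all three lines anyway, rather than leading with the one that moved.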
Building genuine trust with clients and internal stakeholders also requires being honest about what the data cannot tell you. Credibility is built through consistency, and consistency means presenting inconvenient findings with the same clarity as convenient ones. That is harder than it sounds when commercial relationships are involved.
The Relationship Between Motivated Reasoning and Buyer Behaviour
There is a reason this topic sits within buyer psychology. Decision-based evidence making is not just an internal team problem. It shapes how marketing is built and presented to buyers, and buyers are often on the receiving end of it.
When a brand selects the testimonials that support its claims, presents case studies from its best-performing clients, and builds social proof around its most favourable outcomes, it is doing a version of the same thing. That is not inherently dishonest. But it does mean that buyers are routinely presented with evidence that has been curated to reach a predetermined conclusion. The mechanics of social proof depend on this curation. The question is whether the curation is transparent enough that buyers can calibrate accordingly.
Buyers who are sophisticated about this, particularly in B2B contexts, are increasingly resistant to evidence that looks too clean. They have seen enough vendor case studies to know that the methodology behind them is not always neutral. The marketing teams that build credibility are the ones that present evidence with appropriate caveats, acknowledge where results were context-dependent, and do not overstate what the data supports.
That kind of intellectual honesty is commercially counterintuitive. It feels like giving ground. In practice, it tends to build more durable trust than a perfectly curated deck, because it signals that you are not managing the narrative. That signal matters more than most marketing teams realise. The broader principles around how buyers process credibility and make decisions are covered in depth in the Persuasion and Buyer Psychology hub, which is worth working through if you are thinking about how to build genuine rather than manufactured authority.
What Good Looks Like
Good analytical practice in marketing does not mean being pessimistic or refusing to tell a story. It means being honest about what the evidence supports and what it does not, and building the confidence to say so even when the room would prefer a cleaner narrative.
The teams I have seen do this well share a few characteristics. They are comfortable with ambiguity. They do not treat uncertainty as a communication problem to be managed. They present ranges rather than point estimates where the data warrants it. They distinguish between what happened, what probably caused it, and what they think should happen next, and they are clear about which of those three things they are doing at any given moment.
That clarity is not just analytically better. It is commercially better. Clients and stakeholders who trust the analysis are more willing to act on it, more willing to have difficult conversations based on it, and more willing to give the team the benefit of the doubt when the picture is complicated. That is worth more than a deck that always tells a good story.
Marketing does not need perfect measurement. It needs honest approximation and the discipline not to mistake a convenient number for a true one. That distinction is where most of the real analytical work happens, and it is where most teams, under pressure, quietly stop doing it.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
