Decision-Based Evidence Making: Why Marketers Find the Data They’re Looking For
Decision-based evidence making is the practice of reaching a conclusion first, then selecting data to support it. It is the reverse of evidence-based decision making, and it is far more common in marketing than most practitioners would like to admit. The result is strategy that feels rigorous because it is wrapped in numbers, but is in fact just opinion wearing a spreadsheet.
Understanding how this happens, and why it is so hard to catch in the moment, is one of the more useful things a senior marketer can do. It is not a failure of intelligence. It is a failure of process, and often of incentive structures.
Key Takeaways
- Decision-based evidence making starts with a conclusion and works backwards through data, producing analysis that looks objective but is not.
- Confirmation bias is the cognitive engine behind it, and it operates below the level of conscious awareness in most cases.
- Organisational incentives, particularly around budget approval and performance reporting, are the structural conditions that make it worse.
- The antidote is not more data. It is a deliberate process that separates hypothesis formation from data selection.
- Marketers who learn to spot this pattern in their own work produce better strategies and build more credibility with commercial stakeholders.
In This Article
- What Does Decision-Based Evidence Making Actually Look Like?
- Why Confirmation Bias Is Only Part of the Story
- The Metrics Selection Problem
- How Budget Cycles Make It Worse
- The Agency Side of the Problem
- How to Catch It in Your Own Work
- What This Has to Do With Buyer Psychology
- The Effie Problem: When Industry Validation Reinforces the Pattern
- Building a Team That Catches This
What Does Decision-Based Evidence Making Actually Look Like?
It rarely looks like dishonesty. That is what makes it so persistent. It usually looks like enthusiasm, like a team that has done its homework, like a well-structured recommendation deck with charts and footnotes. The tell is in the sequencing: the recommendation was written before the analysis was complete, and the analysis was shaped to fit the recommendation.
I have sat in enough agency presentations and client board meetings to recognise the pattern. Someone in the room already knows what they want to do. The research phase was real, but it was not neutral. Metrics were selected because they moved in the right direction. Competitor examples were chosen because they supported the case. Data that pointed the other way was described as “not directly comparable” or quietly left out of the appendix.
The conclusion lands with confidence. The charts are clean. And because no single data point is fabricated, the whole thing has a kind of plausible deniability. Nobody lied. But the process was not honest either.
This is the territory that sits between deliberate manipulation and genuine analysis. It is where most bad marketing strategy lives.
Why Confirmation Bias Is Only Part of the Story
Confirmation bias is the cognitive mechanism most often cited here, and it is relevant. People naturally seek out information that confirms what they already believe, and they apply more scrutiny to information that contradicts it. That is well-documented human behaviour, not a character flaw specific to marketers.
But stopping at confirmation bias names the symptom without explaining why the conditions for it exist in the first place. In most organisations, those conditions are structural. Budget approval processes reward confident recommendations over honest uncertainty. Performance reviews reward outcomes over process quality. Clients, in my experience, often want to be told that their instinct was right. They are paying for validation as much as they are paying for analysis.
When I was running an agency and we were growing fast, there was always pressure to win pitches and keep clients happy. The temptation to shape a narrative around what the client wanted to hear was real. The agencies that gave in to it consistently produced work that looked great in presentations and underperformed in market. The ones that pushed back, that said “the data does not support this channel mix” or “your attribution model is flattering the wrong touchpoints,” had harder conversations but built better long-term relationships.
Confirmation bias is the cognitive engine. Incentive structures are the fuel. You cannot fix the first without addressing the second.
If you are thinking about how this connects to broader patterns in how buyers and decision-makers process information, the Persuasion and Buyer Psychology hub covers the cognitive shortcuts and behavioural patterns that shape how people make choices, including the ones that happen inside marketing teams, not just in the minds of customers.
The Metrics Selection Problem
One of the clearest expressions of decision-based evidence making is selective metric reporting. Not fraud. Selection. You choose to report the metrics that tell the story you want to tell, and you frame the ones that do not as secondary, contextual, or outside the scope of this particular analysis.
I have seen this play out in every category I have worked in. A paid social campaign is underperforming on revenue contribution, so the report leads with reach and engagement. An SEO programme is not driving conversions, so the update focuses on ranking improvements and organic traffic volume. A brand campaign cannot demonstrate any measurable commercial effect, so the post-campaign analysis emphasises awareness uplift and brand recall scores.
None of those metrics are worthless. Reach matters. Rankings matter. Awareness matters. But when they are selected specifically because the primary commercial metric is moving in the wrong direction, the reporting is not informing a decision. It is protecting one.
The discipline of agreeing on success metrics before a campaign launches, and committing to reporting on them regardless of outcome, is one of the simplest structural fixes available. It is also one of the most commonly skipped, because it removes the flexibility to reframe results after the fact.
Building credibility with commercial stakeholders depends heavily on this kind of consistency. Trust signals in a marketing context are not just about what you say to customers. They are about how reliably your internal reporting reflects reality.
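To make that discipline concrete, here is a minimal sketch of what pre-registering success metrics could look like if you wanted to be strict about it. The metric names, the primary/secondary tiers, and the idea of keeping the list in a versioned file are illustrative assumptions, not a description of any particular tool or campaign.

```python
# Illustrative sketch of pre-registered campaign success metrics.
# Metric names and tiers are hypothetical examples, not real campaign data.

# Agreed before launch and, ideally, committed to version control so the
# list cannot be quietly rewritten after the results come in.
PRE_REGISTERED_METRICS = {
    "revenue_contribution": "primary",
    "qualified_leads": "primary",
    "cost_per_acquisition": "secondary",
    "reach": "secondary",
}

def missing_from_report(reported: set[str]) -> list[str]:
    """Return every pre-registered metric absent from the final report."""
    return [metric for metric in PRE_REGISTERED_METRICS if metric not in reported]

# Example: a post-campaign report that leads with reach and engagement
# but omits the primary commercial metric.
gaps = missing_from_report({"reach", "engagement_rate", "qualified_leads"})
print(gaps)  # ['revenue_contribution', 'cost_per_acquisition']
```

The tooling is beside the point. What matters is that the list exists before the results do, so any gap between what was agreed and what was reported is visible rather than negotiable.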
How Budget Cycles Make It Worse
Annual budget planning is one of the most reliable generators of decision-based evidence making in large organisations. The sequence is almost always the same. A team decides what budget it wants to protect or grow. It then constructs the evidence case to justify that number. The analysis is not independent of the conclusion. It is built around it.
I spent time working with businesses where the marketing function had to fight for budget against other commercial priorities. The teams that presented honest assessments of what had worked, what had not, and where the genuine opportunity lay, were often less successful in budget rounds than the teams that presented polished narratives with optimistic projections. That is a system problem, not a people problem.
When the reward for honesty is a smaller budget and the reward for a well-constructed case is a larger one, the incentive to reverse-engineer your evidence is significant. Fixing this requires senior leadership to actively signal that they value honest analysis over confident-sounding projections. That is harder than it sounds, because confident-sounding projections feel like decisive leadership in the room, and honest uncertainty can feel like weakness.
The marketers I have most respected over two decades were the ones who could say “we do not have enough data to be confident about this, and here is what we would need to find out.” That is not hedging. That is intellectual honesty, and it is genuinely rare.
The Agency Side of the Problem
Agencies have their own version of this, and it is worth being direct about it. When an agency has already committed to a strategic direction, either because it has been sold to the client or because it aligns with the agency’s capabilities, the temptation to find evidence that supports it is significant.
I have been on both sides of this. Early in my career I watched agencies build cases for channels and tactics that happened to be the ones they were best at delivering, or the ones with the highest margin. The research supported the recommendation because the recommendation came first. The client got a confident presentation and a strategy shaped more by agency economics than by genuine analysis.
Later, when I was running an agency myself, I made a point of separating the strategy function from the sales function as much as possible. The people doing the analysis should not be the people whose revenue depends on a particular outcome. That separation is not always commercially viable for smaller shops, but the awareness of the conflict is the minimum requirement. If you know your objectivity is compromised, you have an obligation to flag it.
The hardest version of this I faced was on a project that had been sold at roughly half the budget it needed to deliver properly. The client had a business logic problem: they had not defined what they actually needed the work to do, and neither had the team that sold it. When I went back to the client and told them we were going to walk away rather than deliver something inadequate, the conversation was uncomfortable. But it was honest. The alternative was to keep finding evidence that the project was on track while knowing it was not. That is decision-based evidence making in its most corrosive form: the evidence that everything is fine, constructed to avoid a difficult conversation.
How to Catch It in Your Own Work
The difficulty with decision-based evidence making is that it does not feel like bias when you are doing it. It feels like building a case. It feels like doing the work. The cognitive experience of selecting supporting evidence and the cognitive experience of conducting genuine analysis are not obviously different from the inside.
A few practical tests that help:
Write down your hypothesis before you look at the data. If you cannot articulate what you expect to find before you start the analysis, you are not forming a hypothesis. You are fishing for confirmation. Writing it down forces you to commit to a position that the data can then genuinely test.
Actively look for the data that would change your mind. Not the data that supports your case. The data that would make you recommend something different. If you cannot identify what that data would look like, your analysis is not falsifiable, and unfalsifiable analysis is not analysis.
Assign someone the role of challenger. Not devil’s advocate as a performative exercise, but a genuine brief to find the strongest case against the recommendation. This works best when the challenger is not the most junior person in the room, because seniority dynamics suppress honest dissent.
Audit your metric selection. List every metric that is relevant to the question you are trying to answer. Then ask which ones you included in your analysis and which ones you left out. If the ones you left out consistently moved in the wrong direction, you have a selection problem.
Separate the person who forms the hypothesis from the person who selects the evidence. Even informally. Even just by asking a colleague to pull the data before you tell them what you are looking for. The separation creates enough distance to reduce the most obvious forms of motivated reasoning.
None of these are complicated. The barrier is not capability. It is the willingness to slow down a process that organisational culture usually rewards for moving fast.
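For anyone who wants to make the metric selection audit mechanical rather than aspirational, a rough sketch along these lines would do it. The data structures, metric names, and figures are assumptions for illustration; the value is in forcing the comparison, not in the code.

```python
# Rough sketch of the metric selection audit: which relevant metrics were
# left out of the analysis, and did the omitted ones move the wrong way?
# All metric names and figures below are hypothetical.

relevant = {
    # metric: (change vs. baseline, direction that counts as "good")
    "revenue_contribution": (-0.08, "up"),
    "organic_traffic": (0.22, "up"),
    "conversion_rate": (-0.05, "up"),
    "cost_per_acquisition": (0.12, "down"),
}

reported = {"organic_traffic"}  # what actually made it into the deck

def moved_wrong_way(change: float, good_direction: str) -> bool:
    """True when a metric moved against the direction defined as good."""
    return change < 0 if good_direction == "up" else change > 0

omitted = {m: v for m, v in relevant.items() if m not in reported}
suspicious = [m for m, (change, good) in omitted.items() if moved_wrong_way(change, good)]

print("Omitted:", sorted(omitted))
print("Omitted and moving the wrong way:", sorted(suspicious))
# When the second list is most of the first, that is the selection problem.
```

Run against a real campaign, the interesting output is the second list: the omissions that also happen to be the metrics moving against the recommendation.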
What This Has to Do With Buyer Psychology
Decision-based evidence making is not just an internal strategy problem. It has a direct effect on how marketing communicates with buyers, and it is worth understanding why.
When a marketing team has built its strategy on reverse-engineered evidence, the messaging that results tends to reflect the team’s preferred narrative rather than the buyer’s actual concerns. The product benefits that get emphasised are the ones the team is most confident about, not necessarily the ones buyers find most compelling. The objections that get addressed are the ones the team finds easiest to handle, not the ones buyers are most likely to have.
Buyers are not passive recipients of messaging. They apply their own evaluation process, and part of that process involves assessing whether the claims being made are credible. Trust signals matter because buyers are doing their own version of evidence assessment, and they are reasonably good at detecting when a case has been constructed rather than discovered.
The connection runs deeper than that. Social proof works as a persuasion mechanism precisely because it provides evidence that is not controlled by the seller. Third-party validation, customer reviews, and peer behaviour all carry more weight than brand claims because they are harder to reverse-engineer. Buyers trust them more because they have less reason to suspect motivated selection.
When marketing teams understand decision-based evidence making, they become better at recognising why certain persuasion tactics work and others do not. The tactics that work are the ones that feel genuinely evidenced to the buyer. The ones that fail are often the ones where the selection process is visible, where the claim feels constructed to support a conclusion rather than drawn from honest observation.
Social proof in conversion contexts is a good illustration of this: it works when it is specific, varied, and credible, and it fails when it is generic, uniform, or obviously curated to present only the most favourable picture. The psychology of buyer trust and the psychology of internal evidence selection are more closely related than most marketing teams recognise.
There is a broader body of thinking on how buyers process evidence and make decisions under uncertainty. The Persuasion and Buyer Psychology hub pulls together the most commercially relevant threads, including how cognitive biases shape evaluation, how trust is built and lost, and why the same evidence lands differently depending on how it is framed and who presents it.
The Effie Problem: When Industry Validation Reinforces the Pattern
I have judged the Effie Awards, which are among the most rigorous effectiveness awards in the industry because they require entrants to demonstrate business outcomes, not just creative quality. The standard is higher than most award schemes. But even there, the entries that struggle are often the ones where the evidence has been assembled to support a narrative rather than to honestly test one.
You can usually tell. The metrics shift between the campaign objective section and the results section. The baseline is defined in a way that makes the uplift look larger. The counterfactual is ignored. The attribution is convenient. None of it is fabricated, but the selection is doing a lot of work.
The broader industry has a version of this problem at scale. Case studies, award entries, and conference presentations all create incentives to present the most favourable interpretation of results. Over time, this shapes what the industry believes about what works. The evidence base for marketing effectiveness is partly constructed from a corpus of selectively reported outcomes. That is worth being clear-eyed about.
It does not mean the industry’s accumulated knowledge is worthless. It means it should be read critically, with an awareness of the selection mechanisms that shaped it. The campaigns that did not work are not in the case study library. The strategies that were abandoned are not on the conference agenda. What gets reported is a filtered version of reality, and the filter is not neutral.
Building a Team That Catches This
The individual practices matter, but the team environment matters more. A culture where honest analysis is rewarded, where saying “I was wrong about this” is treated as a sign of rigour rather than weakness, where uncertainty is expressed rather than hidden, is a culture that produces better strategy over time.
When I was growing a team from around twenty people to close to a hundred, the hires that made the biggest difference were not the ones with the most impressive credentials. They were the ones who could hold a position under pressure, change it when the evidence genuinely shifted, and tell the difference between those two situations. That is a rare combination. Most people either cave too easily or dig in too hard. The ones who could both hold firm and change course, and knew which the moment called for, were the ones who made the analysis function trustworthy.
Creating that environment requires explicit permission from leadership. If the most senior person in the room always arrives with a conclusion and uses the team to validate it, the team will learn to produce validation. If the most senior person models genuine uncertainty, asks questions they do not know the answer to, and changes their position when the evidence warrants it, the team will learn to do the same.
This is not a soft skills point. It is a commercial one. Teams that produce honest analysis make better decisions. Better decisions produce better outcomes. The connection is direct, even if it takes longer to see than a single campaign cycle.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
