Data-Driven Decisions Are Harder Than They Look

Adopting data-driven decision making sounds straightforward until you try to do it properly. Most businesses have more data than they know what to do with, a handful of dashboards that nobody fully trusts, and a gap between what the numbers say and what actually gets decided. The challenges are rarely technical. They are cultural, structural, and sometimes just a matter of honesty about what the data is actually telling you.

The businesses that get this right are not the ones with the most sophisticated tools. They are the ones that have learned to ask better questions before they look at the data, and harder questions after.

Key Takeaways

  • Most data adoption failures are cultural and structural, not technical. The tools are rarely the problem.
  • Bad measurement doesn’t just produce wrong answers. It produces confident wrong answers, which is worse.
  • Data and persuasion are more connected than most marketers admit. How findings are framed internally determines whether they change behaviour.
  • The gap between data collection and data-informed decisions is where most organisations actually fail.
  • Fixing measurement often exposes how little certain marketing activities were contributing. That discomfort is a feature, not a bug.

There is a broader context worth establishing here. Data-driven decision making sits inside a wider discipline of understanding how people actually behave, what motivates them, and why they buy. If you want to go deeper on that, the Persuasion and Buyer Psychology hub covers the psychological frameworks that sit underneath the numbers, because data tells you what happened, not why it happened or what to do next.

Why Does Data-Driven Decision Making Fail Before It Starts?

The most common failure mode is not a lack of data. It is a lack of agreement on what the data is supposed to answer. I have sat in too many planning sessions where a business has invested in a new analytics platform, hired a data analyst, and built a reporting suite, and then continued making decisions the same way they always did. The data existed in a parallel universe to the actual decision-making process.

Part of this is structural. When I ran agencies, I noticed that the businesses with the clearest data cultures were the ones where someone senior had defined, in advance, what a good outcome looked like. Not in vague terms like “improve engagement” but in specific, measurable terms tied to revenue or margin or customer lifetime value. Without that anchor, data becomes decorative. It gets used to support decisions that have already been made, not to inform decisions that are yet to be made.
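
To make “specific, measurable terms” concrete, here is a minimal sketch of one such anchor, a simple customer lifetime value calculation. The figures, function name, and the target quoted in the comments are hypothetical, not drawn from any client work.

```python
def customer_lifetime_value(avg_order_value: float,
                            orders_per_year: float,
                            gross_margin: float,
                            retention_years: float) -> float:
    """Contribution-margin view of what one retained customer is worth."""
    return avg_order_value * orders_per_year * gross_margin * retention_years

# Invented baseline: the anchor is agreed before any campaign data arrives,
# e.g. "lift CLV from £540 to £600 within twelve months", not "improve engagement".
baseline = customer_lifetime_value(avg_order_value=60.0, orders_per_year=4,
                                   gross_margin=0.45, retention_years=5)
print(f"Baseline CLV: £{baseline:.2f}")  # £540.00
```

The precise numbers are beside the point. What matters is that the target exists, in writing, before anyone opens a dashboard.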

The other part is honesty. If businesses could retrospectively measure the true impact of their marketing on business performance, many would be uncomfortable with what they found. A lot of activity that feels productive, that generates reports and presentations and internal approval, makes very little actual difference. Fixing measurement surfaces that reality. Which is precisely why many organisations resist fixing it.

What Role Does Organisational Culture Play?

Culture is where most data initiatives quietly die. Not in the technology selection phase, not in the implementation, but in the moment when the data says something that contradicts what a senior person believes, and the senior person wins anyway.

This is not always arrogance. Sometimes it is experience. A marketing director with fifteen years in a category may have a genuinely better read on a situation than a dataset built over six months. The problem is that this same dynamic gets used to dismiss data that is inconvenient, not just data that is incomplete. The skill is knowing the difference, and most organisations have not built the internal language to have that conversation without it becoming political.

Understanding how cognitive biases shape decision-making is directly relevant here. Confirmation bias, in particular, is rampant in data-heavy environments. People do not look at data neutrally. They come to it looking for confirmation. A good data culture builds in friction against that tendency, through peer review, through pre-registration of hypotheses, through genuine separation between the person who builds the analysis and the person who interprets it.

There is also a status dimension. In many organisations, the people who have historically made decisions by instinct are the most senior and the most successful. Shifting to data-driven processes implicitly challenges their authority. That is a change management problem, not a data problem. And it requires the kind of internal persuasion that most data teams are not trained to do.

How Does Poor Data Quality Undermine Decision Making?

Data quality is the unglamorous problem that sits underneath almost every other challenge. You can have the best visualisation tools, the most capable analysts, and a leadership team genuinely committed to evidence-based decisions, and still get it badly wrong if the underlying data is unreliable.

I worked on a project early in my agency career where a client was making significant budget allocation decisions based on a CRM that had not been properly maintained for three years. Contact records were duplicated, attribution was broken, and the revenue figures being reported bore almost no relationship to what the finance team was seeing. Nobody had flagged it because nobody wanted to be the person who said the reporting infrastructure was compromised. The data looked authoritative. It was presented in polished dashboards. It was wrong.

This matters because bad data does not just produce wrong answers. It produces confident wrong answers. A business acting on instinct at least knows it is acting on instinct. A business acting on corrupted data thinks it has evidence. That is a more dangerous position.

The discipline of understanding propensity to buy is a good example of where data quality problems compound quickly. Propensity modelling depends on clean, consistent behavioural data. Feed it inconsistent inputs and the model produces outputs that look precise but are built on sand. The number has decimal places. It has no reliability.
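
A minimal sketch of how that plays out, assuming a hypothetical behavioural log and a deliberately naive propensity proxy; the contact IDs, events, and the sync-bug scenario are invented for illustration.

```python
import pandas as pd

# Hypothetical behavioural log from a poorly maintained CRM: a sync bug has
# double-counted C1's events. Contact IDs and event names are invented.
events = pd.DataFrame({
    "contact_id": ["C1", "C1", "C1", "C1", "C2", "C2", "C2"],
    "event_id":   ["e1", "e1", "e2", "e2", "e3", "e4", "e5"],
    "event":      ["pricing_view"] * 4 + ["pricing_view", "demo_request", "pricing_view"],
})

# A naive propensity proxy: each contact's share of all recorded activity.
def propensity(log: pd.DataFrame) -> pd.Series:
    counts = log.groupby("contact_id").size()
    return (counts / counts.sum()).round(4)

print(propensity(events))                    # C1 0.5714, C2 0.4286 -- four decimal places
print(propensity(events.drop_duplicates()))  # C1 0.4000, C2 0.6000 -- the ranking flips
```

Removing two duplicate rows is enough to flip which contact looks more likely to buy. That is the sort of shift that never announces itself in a polished dashboard built on the raw feed.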

The fix is not always expensive. It is often just discipline. Consistent naming conventions, regular audits, clear ownership of data assets, and a willingness to say “we do not have reliable data on this” rather than using whatever is available on the grounds that something is better than nothing. Sometimes nothing is better than something misleading.
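
What that discipline can look like in code is modest. Below is a minimal sketch of a lightweight audit over a hypothetical CRM extract; the column names, checks, and sample records are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def audit_contacts(contacts: pd.DataFrame, finance_revenue: float) -> dict:
    """Lightweight quality checks on a hypothetical CRM extract."""
    reported = contacts["attributed_revenue"].sum()
    return {
        "duplicate_emails": int(contacts["email"].duplicated().sum()),
        "missing_source": int(contacts["source"].isna().sum()),
        "revenue_gap_vs_finance": round(float(reported - finance_revenue), 2),
    }

# Invented records: one duplicated contact, one missing source, inflated revenue.
contacts = pd.DataFrame({
    "email": ["a@x.com", "a@x.com", "b@y.com", "c@z.com"],
    "source": ["paid_search", "paid_search", None, "email"],
    "attributed_revenue": [1200.0, 1200.0, 800.0, 350.0],
})
print(audit_contacts(contacts, finance_revenue=2350.0))
# {'duplicate_emails': 1, 'missing_source': 1, 'revenue_gap_vs_finance': 1200.0}
```

None of this is sophisticated, which is rather the point. The value is in running it routinely and acting on what it flags.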

Why Is the Gap Between Data and Action So Hard to Close?

Collecting data and acting on data are two entirely different organisational competencies. Many businesses have invested heavily in the first and almost nothing in the second.

The gap shows up in a few specific ways. Insight arrives too late to be useful, after the budget has been set or the campaign has launched. Findings are presented in technical language that decision-makers cannot engage with. Recommendations are framed as observations rather than actions, leaving the so-what entirely to the reader. Or the data points to a conclusion that requires someone to admit a previous decision was wrong, and the organisation’s culture makes that too costly to do openly.

This is where the difference between persuasion and argument becomes operationally relevant. Presenting data as an argument (“here is the evidence, therefore this is the right answer”) rarely changes behaviour in organisations with established power structures. Persuasion requires understanding what the decision-maker values, what risks they are trying to avoid, and how to frame the finding in terms that connect to those things. Data teams that understand this get their recommendations implemented. Data teams that do not understand it spend a lot of time producing excellent work that nobody acts on.

The psychology of how decisions actually get made in organisations is worth understanding here. Formal decision processes and actual decision processes are rarely the same thing. Data that enters through the formal process often loses to intuition that operates through the informal one.

What Measurement Failures Are Most Costly?

Attribution is the measurement problem that costs businesses the most money without them realising it. When you cannot reliably connect marketing activity to business outcomes, budget allocation becomes a political process rather than an analytical one. The channels with the most internal advocates get the money, not the channels with the best evidence.

I spent years managing significant ad spend across multiple industries, and the pattern was consistent. Last-click attribution models systematically over-credited paid search and under-credited everything that happened earlier in the customer experience. Businesses were cutting brand investment because it did not show up in the attribution model, then wondering why their paid search efficiency was declining. The model was not measuring reality. It was measuring a simplified version of reality that happened to be easy to report.
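
To see how much the choice of model shapes the story, here is a minimal sketch comparing last-click and linear (even-credit) attribution over the same journey; the touchpoints and order value are invented for illustration.

```python
from collections import defaultdict

# An invented customer journey, in the order the touches happened.
journey = ["display", "organic_social", "email", "paid_search"]
order_value = 300.0

def last_click(touches: list[str], value: float) -> dict[str, float]:
    """All credit goes to the final touch before conversion."""
    return {touches[-1]: value}

def linear(touches: list[str], value: float) -> dict[str, float]:
    """Credit is split evenly across every touch in the journey."""
    credit = defaultdict(float)
    for touch in touches:
        credit[touch] += value / len(touches)
    return dict(credit)

print(last_click(journey, order_value))  # {'paid_search': 300.0}
print(linear(journey, order_value))      # every touch gets 75.0
```

Under last-click, every earlier touch earns nothing, which is exactly how brand activity ends up looking worthless in the report even when it did much of the work.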

The pharmaceutical sector faces a version of this that is particularly instructive. The regulatory environment forces a level of rigour around evidence and claims that most consumer marketers never have to apply. Looking at how social proof operates in pharmaceutical marketing illustrates what happens when an industry is forced to substantiate its claims rather than assert them. The standards are higher. The claims are more defensible. And the measurement has to be honest because the consequences of it not being honest are severe.

Most marketing operates without that external pressure. Which means the pressure has to come from inside, from leaders who are willing to ask whether the measurement is honest, not just whether it is positive. That is a harder cultural ask than it sounds.

How Do Skills Gaps Slow Down Data Adoption?

There is a persistent mismatch in most marketing teams between the data that is available and the skills required to interpret it properly. This is not just about technical skills, though those matter. It is about the analytical thinking required to ask the right questions of data before you start looking at it.

When I was building teams at iProspect, one of the things I noticed as we scaled from around 20 people to closer to 100 was that the most valuable hires were not always the most technically proficient. They were the people who could hold a business question in mind while working through an analysis, who could distinguish between a statistically interesting finding and a commercially meaningful one, and who could communicate the difference clearly to a client or a senior stakeholder.

That combination is genuinely rare. Technical data skills are teachable. Commercial judgement takes longer to develop. And the instinct to challenge your own findings, to ask whether the data could be telling a different story, is something that many organisations actively discourage because it slows things down and creates uncertainty.

Understanding consumer motivation and experiential buying behaviour is a good example of where analytical skill needs to be paired with human insight. The data can tell you what happened in a conversion funnel. It cannot tell you why a customer who looked motivated by price actually made a decision based on trust. That gap requires qualitative thinking alongside the quantitative, and most data-focused teams are not set up to do both.

When Does Data Adoption Create Internal Conflict?

One of the less-discussed challenges of moving to data-driven decision making is what it does to internal power dynamics. When decisions were made by experience and intuition, authority was distributed according to seniority and track record. When decisions are made by data, authority shifts toward whoever controls the data and the interpretation of it. That shift creates conflict.

I have seen this play out in client organisations where a relatively junior analytics team produced findings that directly challenged a strategy owned by a senior marketing director. The findings were correct. The strategy was underperforming. But the way the findings were presented, without any acknowledgment of the political reality of the situation, meant they were dismissed rather than acted on. The data was right. The approach was wrong.

This is where the distinction between coercion and persuasion becomes relevant in an internal context. Presenting data as an irrefutable verdict that demands a particular response is a form of coercion. It creates defensiveness. It triggers the very cognitive biases it is trying to overcome. Framing data as a contribution to a conversation, one perspective on a complex situation, tends to produce better outcomes even when the underlying conclusion is the same.

The most effective data cultures I have seen are ones where presenting evidence that challenges a previous decision is treated as a contribution rather than an attack. That norm has to be modelled from the top. If senior leaders respond to inconvenient data by questioning the methodology rather than engaging with the finding, everyone below them learns to present only comfortable data.

What Does Good Data-Driven Practice Actually Look Like?

Good data-driven practice is not about having more data or better tools. It is about building the habits and structures that connect evidence to decisions reliably and honestly.

That means defining what you are trying to measure before you measure it, not after. It means separating the people who build analyses from the people who interpret them, at least some of the time. It means having a clear escalation path for findings that challenge existing strategy, so they get heard rather than buried. And it means being willing to say, publicly and without embarrassment, “we were wrong about this and here is what the data is telling us instead.”

It also means being honest about the limits of the data you have. I have judged Effie Award entries where the measurement approach was clearly designed to tell a positive story rather than an accurate one. The metrics selected, the timeframes chosen, the baselines used, all of it pointed in one direction because the team had decided on the story first and built the measurement to support it. That is not data-driven decision making. That is data-assisted storytelling, which is a different thing entirely.

Trust signals matter in this context too. The credibility of data-driven recommendations depends on the credibility of the data itself. Building trust into how findings are presented is not just a communication issue. It is a structural one. When stakeholders trust the data, they act on it. When they do not, they find reasons not to.

The connection between data and human decision-making is also worth examining through the lens of how people actually process information under uncertainty. Research on decision-making consistently shows that people are not rational processors of evidence. They are pattern-matchers with emotional responses to uncertainty. Data-driven organisations that ignore this are building processes for a version of human behaviour that does not exist.

One practical implication: the most effective data presentations are not the most comprehensive ones. They are the ones that reduce complexity to the one or two things that actually matter for the decision at hand. More data, presented without editorial judgement, produces paralysis. Less data, presented with clear commercial context, produces decisions.

There is a deeper thread running through all of this that connects to how buyers and organisations process information and make choices. If you want to explore the psychological frameworks that explain why evidence does not always win the argument, the Persuasion and Buyer Psychology hub is a useful place to continue. The same principles that govern how customers respond to marketing govern how internal stakeholders respond to data. Human cognition does not change because the context is professional.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the biggest barrier to data-driven decision making in most businesses?
The most common barrier is cultural, not technical. Organisations that have historically made decisions by seniority and instinct often struggle to build the norms required for data to genuinely influence decisions, particularly when findings challenge existing strategies or the judgement of senior people. Technology and tools are rarely the limiting factor.
Why do organisations collect data but fail to act on it?
The gap between data collection and data-informed action usually comes down to three things: findings arrive too late to influence decisions that have already been made, insights are framed as observations rather than clear recommendations, or the data challenges a decision owned by someone with more internal authority than the analyst presenting it. Closing this gap requires both better process and better internal communication skills.
How does poor data quality affect marketing decisions?
Poor data quality does not just produce wrong answers. It produces confident wrong answers, which is more dangerous than acknowledged uncertainty. Decisions made on corrupted or inconsistent data feel evidence-based but are not. The most costly version of this is broken attribution, where budget allocation follows the measurement model rather than actual performance, systematically over-investing in channels that are easy to measure and under-investing in those that are not.
What skills do marketing teams need to use data effectively?
Technical data skills matter, but commercial judgement matters more. The most effective analysts are those who can hold a business question in mind while working through an analysis, distinguish between statistically interesting and commercially meaningful findings, and communicate clearly to non-technical decision-makers. The instinct to challenge your own findings, rather than present the most positive interpretation, is also critical and often undervalued.
How should data findings be presented to get them acted on?
Data findings are more likely to be acted on when they are framed in terms of the decision-maker’s priorities and risk concerns, rather than presented as irrefutable verdicts. Reducing complexity to the one or two things that actually matter for the decision at hand is more effective than comprehensive reporting. Understanding the difference between persuasion and argument, and applying that understanding internally, is one of the most underrated skills in data-driven organisations.
