Bad Data Doesn’t Just Mislead You. It Costs You.
Bad data is not a technical problem sitting in your IT department’s backlog. It is a business problem that compounds quietly, distorting every decision made above it. When the data feeding your marketing strategy is incomplete, inconsistent, or simply wrong, every plan built on top of it inherits those flaws, and most organisations never trace the failure back to its source.
Harvard Business Review has written extensively on data quality, and the core argument holds: organisations systematically underestimate the cost of bad data because the damage is diffuse. It shows up as missed targets, wasted budget, poor forecasting, and strategic decisions that feel reasonable at the time but produce baffling results. The data was wrong. The decisions were logical. The outcomes were expensive.
Key Takeaways
- Bad data costs organisations far more than the technical fix required to address it, because the damage accumulates across every function that relies on it.
- Most marketing teams are optimising against metrics that measure activity, not outcomes, which means good data quality on the wrong metrics still produces bad decisions.
- The organisations that fix measurement first tend to discover that a significant portion of their marketing spend was performing far better, or far worse, than they assumed.
- Data quality is not a one-time audit. It degrades continuously, and most teams have no systematic process for catching that degradation before it distorts strategy.
- The real risk of bad data is not that you make one catastrophic decision. It is that you make dozens of small, confident, well-reasoned decisions that all point in the wrong direction.
In This Article
- Why Bad Data Is So Hard to Catch
- The Compounding Cost Nobody Budgets For
- When Good Data Quality Asks an Uncomfortable Question
- The Measurement Problem Underneath the Data Problem
- What Bad Data Does to Go-To-Market Strategy
- The Organisational Dimension of Data Quality
- Fixing the Problem Without Waiting for Perfect Data
Why Bad Data Is So Hard to Catch
The insidious thing about bad data is that it rarely announces itself. It sits inside dashboards that look authoritative. It populates reports that get presented in board meetings. It feeds the models that inform budget allocation. Nobody in the room is lying. They are reading numbers that are simply wrong, and the confidence with which those numbers are presented makes them harder to question, not easier.
I have sat in client meetings where the marketing team presented attribution data with complete conviction, attributing revenue growth to a channel that had, in reality, been double-counting conversions for six months. The numbers were internally consistent. The trend lines looked clean. The story made sense. It was only when we rebuilt the tracking from scratch that the picture changed entirely. The channel they were scaling aggressively was performing at roughly half the efficiency they believed. The channel they were considering cutting was carrying more weight than anyone had credited it with.
This is the pattern. Bad data does not produce obvious nonsense. It produces plausible nonsense, and plausible nonsense is far more dangerous because it gets acted on.
If you are thinking about how data quality connects to your broader go-to-market approach, the Go-To-Market and Growth Strategy hub covers the full picture, from market entry to performance measurement to how organisations build sustainable commercial momentum.
The Compounding Cost Nobody Budgets For
There is a tendency to think about bad data as a one-time problem with a one-time fix. Run an audit, clean the database, implement better governance, move on. That framing misses the structural issue. Data quality degrades continuously. Customer records go stale. Tracking breaks silently when pages are updated. CRM fields get used inconsistently as teams grow and onboarding slips. The problem is not a single contamination event. It is entropy, and entropy does not stop.
When I was running an agency and growing the team from around 20 people to over 100, one of the things that surprised me was how quickly data hygiene deteriorated as the organisation scaled. The processes that worked when five people touched a system started breaking down when fifty people did. Naming conventions drifted. Campaign tagging became inconsistent. Nobody was being careless deliberately, but the cumulative effect was that our reporting became progressively less reliable at exactly the moment when we needed it to be more reliable, because we were managing more clients, more spend, and more complexity.
The cost of that degradation is not just the time spent cleaning it up. It is the decisions made in the interim on data that was quietly misleading. Budget allocated to the wrong channels. Creative directions pursued based on performance signals that were artefacts of broken tracking rather than genuine audience response. Forecasts built on trend lines that did not mean what they appeared to mean.
Forrester’s work on intelligent growth models makes a related point: organisations that cannot accurately measure what is driving performance cannot reliably replicate it. Growth becomes harder to sustain not because the market changes, but because the organisation loses its ability to understand what it is doing that works.
When Good Data Quality Asks an Uncomfortable Question
One of the things I have observed across 20 years and dozens of clients is that organisations are often more comfortable with bad data than they realise. Not because they prefer to be wrong, but because bad data tends to tell a more comfortable story. It smooths over the channels that are underperforming. It inflates the metrics that justify existing budget allocations. It supports the narrative that the marketing is working.
When you fix the data, the story changes. And sometimes the story that emerges is one that nobody in the room is particularly pleased to hear. I have seen this play out in turnaround situations where a business was measuring marketing activity rather than marketing impact. The volume of campaigns was high. The reporting looked busy. The dashboards were full of green. But when we stripped it back and tried to connect marketing investment to actual revenue outcomes, the link was tenuous at best. A lot of the activity was generating numbers that looked like progress without moving the business forward.
This is not a criticism of the people involved. It is a structural problem. If you build a measurement framework around activity metrics, you will get very good at measuring activity. You will optimise activity. You will report on activity. And the business will not necessarily grow as a result, because activity and impact are not the same thing, and bad data makes it very easy to confuse the two.
The HBR perspective on data quality cuts to the same point from a different angle. The organisations that treat data as a strategic asset, rather than an operational byproduct, tend to ask harder questions of it. They are less likely to accept a metric at face value. They are more likely to ask what the data is actually measuring and whether that measurement is a reliable proxy for what they care about.
The Measurement Problem Underneath the Data Problem
There is a layer beneath data quality that most organisations never reach, and it is the question of whether they are measuring the right things in the first place. You can have perfect data quality on the wrong metrics and still make consistently bad decisions. This is, in my view, the more common and more costly problem.
I spent several years judging the Effie Awards, which are specifically designed to recognise marketing effectiveness rather than creative quality. What struck me, going through hundreds of submissions, was how many campaigns could demonstrate impressive activity metrics and how few could demonstrate genuine business impact. Reach was high. Engagement was strong. Brand recall had moved. But the connection to revenue, to customer acquisition, to retention, was often thin or absent. The data was clean. The measurement framework was the problem.
The honest version of this is that most marketing measurement frameworks are built to justify marketing spend rather than to evaluate it. That is not cynicism. It is just an observation about incentive structures. If the people responsible for marketing are also responsible for measuring marketing, the metrics tend to converge on ones that make marketing look good. Bad data accelerates this dynamic because it provides convenient cover. When the numbers are murky, it is very easy to emphasise the ones that flatter and discount the ones that do not.
Tools like those covered in SEMrush’s overview of growth tools can provide useful signals, but they are a perspective on performance, not a complete picture of it. The discipline is in knowing what each tool measures, what it cannot measure, and how to triangulate across multiple imperfect data sources to arrive at something closer to the truth.
What Bad Data Does to Go-To-Market Strategy
Go-to-market strategy is particularly vulnerable to bad data because it is built on assumptions about customers, segments, and competitive positioning that are easy to get wrong and hard to correct once they are embedded in the plan. If your customer data is unreliable, your segmentation is unreliable. If your segmentation is unreliable, your targeting is unreliable. If your targeting is unreliable, your messaging is optimised for an audience that may not exist in the form you imagine it.
I have worked with businesses that had extremely detailed customer personas built from data that was years out of date, or from surveys with sample sizes too small to be statistically meaningful, or from internal assumptions that had never been validated externally. The personas looked rigorous. They had names and photographs and detailed psychographic profiles. But they were built on a foundation of bad data, and the go-to-market strategy built on top of them was pointing at a market that had either shifted or never quite existed in the way the business believed.
BCG’s work on go-to-market strategy in complex product launches highlights a related challenge: the organisations that struggle most with launches are often the ones that have the most elaborate planning processes but the least reliable market data feeding into them. The planning is sophisticated. The inputs are wrong. The outputs are therefore wrong, regardless of how sophisticated the planning was.
The fix is not more sophisticated planning. It is better data, validated more rigorously, with explicit acknowledgement of where the gaps are. A strategy built on honest approximation is more useful than one built on false precision. Knowing that your customer lifetime value estimate is probably within a reasonable range is more actionable than believing you know it to two decimal places when the underlying data does not support that level of confidence.
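The difference between false precision and honest approximation can be made concrete with a small sketch. The figures and the simple CLV formula below are illustrative assumptions, not benchmarks; the point is the shape of the reasoning, not the numbers.

```python
# Hypothetical illustration: a range-based CLV estimate instead of false precision.
# Every number below is an assumption for the sake of the sketch, not a benchmark.

def clv(avg_order_value, orders_per_year, retention_years, margin):
    """A simple point estimate: order value x frequency x lifespan x margin."""
    return avg_order_value * orders_per_year * retention_years * margin

# Instead of one "precise" figure, bound each input with an honest low and high.
low = clv(avg_order_value=40, orders_per_year=2.0, retention_years=1.5, margin=0.25)
high = clv(avg_order_value=55, orders_per_year=3.0, retention_years=2.5, margin=0.35)

print(f"CLV is plausibly between {low:.0f} and {high:.0f}")
# A decision that only works if CLV is exactly, say, 96.40 is fragile.
# A decision that holds anywhere inside the range is robust.
```

The design choice is the message: a strategy stress-tested against the low and high ends of a stated range survives bad inputs far better than one calibrated to a single figure the underlying data never supported.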
The Organisational Dimension of Data Quality
Data quality is, in the end, a people and process problem, not a technology problem. Organisations buy better tools, yet the data quality problem persists, because the tools are only as good as the processes and disciplines surrounding them. If nobody owns data quality, nobody maintains it. If there is no governance around how data is entered, structured, and validated, quality degrades regardless of how sophisticated the CRM or analytics platform is.
BCG’s research on the alignment between marketing and broader organisational functions makes a point that resonates here: the organisations that perform best commercially tend to be the ones where marketing, sales, and operational functions share a common understanding of what is being measured and why. Data quality is not just a marketing problem. It is an organisational problem, and it requires organisational solutions.
In practice, this means someone has to own it. Not as a secondary responsibility attached to an existing role, but as a genuine priority with accountability attached. It means regular audits, not as a crisis response but as a routine process. It means training people who touch data on why consistency matters. And it means leadership that is willing to hear uncomfortable truths when the clean data tells a less flattering story than the dirty data did.
Feedback loops matter here too. Hotjar’s work on growth loops is a useful reminder that the best-performing organisations are not the ones with the most data. They are the ones with the tightest feedback loops between data, decision, action, and outcome. Bad data breaks those loops. You act, you observe, but what you observe is not an accurate reflection of what you did, so the learning does not accumulate in the way it should.
Fixing the Problem Without Waiting for Perfect Data
There is a version of this conversation that leads to paralysis. If the data is unreliable, how can you make decisions? The answer is that you cannot wait for perfect data, because perfect data does not exist. What you can do is be honest about the quality of what you have, build in appropriate uncertainty margins, and make decisions that are robust across a range of scenarios rather than ones that only work if a single precise assumption turns out to be correct.
The organisations that handle this best are the ones that treat data quality as a spectrum rather than a binary. They know which data points they can rely on with high confidence, which ones are directionally useful but imprecise, and which ones are genuinely unreliable and should not be used for significant decisions. That kind of calibration is more valuable than any individual dataset, because it allows the organisation to make appropriately confident decisions across the full range of what it knows and does not know.
Revenue intelligence platforms like those discussed in Vidyard’s Future Revenue Report point to a growing recognition among go-to-market teams that pipeline visibility depends on data integrity. You cannot identify untapped revenue potential if the data describing your existing pipeline is unreliable. The opportunity is only visible if the baseline is accurate.
Start with the decisions that matter most. What are the three or four strategic choices your organisation is likely to make in the next twelve months where data quality will have the most influence on the outcome? Audit the data that feeds those decisions specifically. Fix what you can. Document what you cannot. Make the uncertainty visible rather than papering over it with false confidence.
That is a more useful place to start than a comprehensive data quality programme that takes eighteen months and never quite finishes. Fix the data that matters for the decisions you are actually making. Everything else is a longer-term project.
The broader question of how data quality connects to commercial performance, go-to-market execution, and growth strategy is something I explore throughout the Go-To-Market and Growth Strategy section of The Marketing Juice. If this piece has raised questions about how your organisation measures what matters, that is a reasonable place to continue the conversation.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
