Data-Driven Marketing: What the Numbers Won’t Tell You
Data-driven marketing means using customer behaviour, campaign performance, and market signals to make decisions rather than relying on instinct alone. Done well, it reduces waste, sharpens targeting, and connects spend to outcomes. Done poorly, it produces confident-sounding decisions built on the wrong questions.
Most organisations have more data than they know what to do with. The constraint is rarely access to numbers. It is knowing which numbers matter, what they are actually measuring, and where they stop being useful.
Key Takeaways
- Data tells you what happened. It rarely tells you why, or what to do next. That gap requires judgement.
- Most attribution models reward the last interaction before purchase, which systematically undervalues the work that created demand in the first place.
- Optimising toward captured intent is not the same as generating growth. The two require different data, different channels, and different success metrics.
- Clean data architecture matters more than sophisticated analytics. Garbage in, garbage out, no matter how good your dashboard looks.
- The companies that use data well treat it as one input into a decision, not as the decision itself.
In This Article
- Why Most Organisations Are Measuring the Wrong Things
- The Attribution Problem Nobody Wants to Admit
- Captured Intent vs. Created Demand: A Distinction That Changes Everything
- What Good Data Infrastructure Actually Looks Like
- The Difference Between Data-Informed and Data-Dependent
- Where Data-Driven Strategy Breaks Down
- Building a Data Strategy That Actually Supports Growth
Why Most Organisations Are Measuring the Wrong Things
Early in my career, I was convinced that performance marketing was the engine of growth. The numbers were right there: cost per click, cost per acquisition, return on ad spend. Everything was attributable, everything was optimisable, and the story it told was clean and satisfying. It took me years to properly interrogate that story.
What I eventually understood is that a significant portion of what performance marketing claims credit for was going to happen anyway. Someone who has already decided to buy your product, who searches your brand name and clicks a paid ad, has been counted as an acquisition. But the ad did not create that intent. Something else did, and that something else is often invisible in the data.
This is the core problem with defaulting to lower-funnel metrics as your primary measure of marketing effectiveness. You end up optimising the last step of an experience you had very little to do with. The channels that actually built awareness and preference, the ones that reached someone before they knew they wanted you, rarely get the credit because they are harder to measure.
If you are thinking about how data fits into a broader commercial strategy, the go-to-market and growth strategy hub covers the wider picture, including how measurement connects to market positioning and commercial planning.
The Attribution Problem Nobody Wants to Admit
Attribution is not a solved problem. It is a set of approximations, each with different flaws, and the one most commonly used, last-click, has a flaw that is particularly damaging to long-term growth strategy.
Last-click attribution assigns full credit to the final touchpoint before conversion. It sounds logical until you think about what that actually rewards. It rewards being present at the moment of purchase, not at the moment of influence. Paid search, retargeting, and brand-direct traffic consistently win under this model. Brand-building, content, social, and upper-funnel activity consistently lose, not because they are not working, but because their contribution happens earlier and is harder to trace.
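To make the mechanics concrete, here is a minimal sketch comparing last-click credit against a simple linear multi-touch rule. The journey and channel names are hypothetical, and real attribution systems weight touchpoints in far more elaborate ways; the point is only to show how the choice of rule decides who "wins".

```python
# A hypothetical four-touch journey ending in a conversion.
journey = ["display", "content", "organic_social", "paid_search"]

def last_click(touchpoints):
    """All credit to the final touchpoint before conversion."""
    return {ch: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, ch in enumerate(touchpoints)}

def linear(touchpoints):
    """Equal credit to every touchpoint in the journey."""
    share = 1.0 / len(touchpoints)
    return {ch: share for ch in touchpoints}

print(last_click(journey))  # paid_search takes 100% of the credit
print(linear(journey))      # each channel gets 25%
```

Under last-click, display and content show zero return even though they opened the journey; under the linear rule, the same data tells a very different budget story.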
I have sat in client meetings where the recommendation was to cut display and content budgets because the attribution data showed low direct return. In almost every case, when those cuts were made, performance marketing efficiency dropped within two quarters. The demand that had been created upstream dried up, and suddenly the lower-funnel channels were working much harder for much worse results. The data had told a true story about one thing while hiding the full picture.
Multi-touch attribution models are an improvement, but they introduce their own distortions. Data-driven attribution, where platforms use machine learning to assign credit across touchpoints, sounds more sophisticated, but it still operates within the closed ecosystem of whatever platform is doing the measuring. Google’s data-driven attribution is not neutral. It is built on Google’s data, which naturally tends to surface Google’s channels as important.
The honest answer is that no single attribution model gives you the full picture. What you need is a combination of approaches: platform attribution for operational decisions, media mix modelling for strategic budget allocation, and incrementality testing to validate whether your activity is genuinely driving additional outcomes. Forrester’s intelligent growth model has long argued for this kind of layered measurement approach, and the underlying logic holds.
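The incrementality layer of that stack can be sketched simply. This is an illustrative hold-out calculation with made-up numbers, not real campaign data: expose one group to the activity, hold another back, and compare conversion rates.

```python
def incremental_lift(test_conversions, holdout_conversions,
                     test_population, holdout_population):
    """Compare conversion rates between an exposed group and a hold-out."""
    test_rate = test_conversions / test_population
    holdout_rate = holdout_conversions / holdout_population
    lift = (test_rate - holdout_rate) / holdout_rate
    return test_rate, holdout_rate, lift

# Illustrative numbers only.
test_rate, holdout_rate, lift = incremental_lift(1_200, 900, 50_000, 50_000)
print(f"exposed: {test_rate:.2%}, held out: {holdout_rate:.2%}, lift: {lift:.1%}")
# If lift is close to zero, the channel is claiming conversions
# that would have happened anyway.
```

A real test needs properly matched groups and a significance check, but even this crude version answers a question platform attribution cannot: whether the activity caused anything at all.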
Captured Intent vs. Created Demand: A Distinction That Changes Everything
There is a useful way to split marketing activity into two categories. The first captures existing demand: people who already want something and are looking for it. The second creates new demand: reaching people who are not yet in the market and making your category, and your brand, relevant to them.
Most data-driven marketing strategies are very good at the first category and almost blind to the second. This is not a criticism of data. It is a structural limitation. People who are actively searching for a product leave a clear trail. People who have never considered your category leave almost none.
Think about a clothes shop. Someone who walks in and tries something on is far more likely to buy than someone who walks past. The act of trying on changes the probability of purchase. But if you only measure transactions, you miss the importance of getting people through the door in the first place. The data will tell you what converted. It will not tell you what caused someone to walk in.
Growth, genuine growth, requires reaching new audiences rather than more efficiently harvesting the ones already looking for you. Market penetration strategy is partly about this: expanding your reach into segments that are not yet customers, which requires a different data strategy than optimising for existing intent.
The data you need to support demand creation looks different. Reach and frequency metrics matter. Brand awareness tracking matters. Category entry point research matters. Share of search, as a proxy for brand consideration, matters. These are softer signals than a cost-per-acquisition figure, but they are measuring something real, and ignoring them produces a skewed picture of where your growth is actually coming from.
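Share of search, for instance, is straightforward to compute once you have category search volumes: your brand's volume divided by the total for the competitive set. The brands and volumes below are hypothetical.

```python
# Hypothetical monthly branded search volumes for a category.
monthly_searches = {"our_brand": 12_000, "rival_a": 30_000, "rival_b": 18_000}

def share_of_search(volumes, brand):
    """Brand search volume as a fraction of the whole competitive set."""
    return volumes[brand] / sum(volumes.values())

print(f"{share_of_search(monthly_searches, 'our_brand'):.1%}")  # 20.0%
```

Tracked over time, movement in this number tends to lead movement in market share, which is exactly what makes it useful as an early demand-creation signal.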
What Good Data Infrastructure Actually Looks Like
Before you can make good decisions from data, the data has to be reliable. This sounds obvious, but the number of organisations running sophisticated analytics on fundamentally broken data pipelines is higher than most people admit.
When I was running an agency and we took on a new client, one of the first things we did was audit their tracking. Not their strategy, not their creative, their tracking. More often than not, we found conversion events firing incorrectly, session data fragmented across subdomains, UTM parameters being stripped by redirects, or CRM data that had never been properly connected to ad platform data. The dashboards looked polished. The underlying data was a mess.
Clean data architecture involves a few non-negotiable foundations. First, a single source of truth for customer data, ideally a properly implemented CRM that is actually used by the teams generating and acting on that data. Second, consistent UTM tagging conventions applied across every channel and every campaign, without exceptions. Third, server-side tracking where possible, particularly given the ongoing degradation of cookie-based measurement. Fourth, regular audits rather than a one-time, set-and-forget approach.
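A tagging audit of the kind described above can be partly automated. This is a minimal sketch: it checks that tracked URLs carry the required UTM parameters and that values follow a single naming convention (lowercase with underscores is an assumed house rule here, not a standard).

```python
from urllib.parse import urlparse, parse_qs
import re

REQUIRED = ("utm_source", "utm_medium", "utm_campaign")
VALUE_RULE = re.compile(r"^[a-z0-9_]+$")  # assumed convention

def audit_url(url):
    """Return a list of tagging problems; an empty list means clean."""
    params = parse_qs(urlparse(url).query)
    problems = [p for p in REQUIRED if p not in params]
    problems += [f"bad value: {p}={v[0]}" for p, v in params.items()
                 if p.startswith("utm_") and not VALUE_RULE.match(v[0])]
    return problems

print(audit_url("https://example.com/?utm_source=newsletter"
                "&utm_medium=email&utm_campaign=spring_launch"))  # []
print(audit_url("https://example.com/?utm_source=Newsletter"))
# missing utm_medium and utm_campaign, plus inconsistent casing
```

Run against a campaign link export before launch, a check like this catches the inconsistencies that otherwise fragment reporting for months.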
Tools like Hotjar for behavioural data and session recording add a qualitative layer that pure analytics cannot provide. Watching how real users interact with a page will surface problems that conversion rate data alone will not explain. The number tells you something is wrong. The session recording tells you what.
The point is not to have more data. It is to have data you can trust. A smaller set of reliable metrics beats a sprawling dashboard of questionable ones every time.
The Difference Between Data-Informed and Data-Dependent
There is a meaningful difference between using data to inform decisions and requiring data to make any decision at all. The first is good practice. The second is a form of organisational paralysis dressed up as rigour.
I have seen marketing teams refuse to test a new channel because there was no historical data to support it. Which is, of course, circular logic. You cannot get data on a channel you have never tested. At some point, you have to make a bet based on incomplete information, and the skill is in making that bet intelligently rather than waiting for certainty that will never arrive.
The best marketing operators I have worked with treat data as one input among several. They use it to validate or challenge hypotheses, to spot patterns they would not have noticed manually, and to make the case for decisions they have already thought through carefully. They do not use it to outsource the thinking.
This matters particularly when data points in a direction that feels wrong. If your attribution data says cut brand spend, but your commercial instinct says that brand spend is what is feeding your pipeline, the right response is not to immediately follow the data. It is to interrogate the model. What is the data actually measuring? What is it not measuring? What would happen if you ran a hold-out test?
Analytics tools are a perspective on reality, not reality itself. The map is not the territory. And the companies that forget this tend to optimise themselves into a corner, chasing metrics that look good while the underlying business quietly deteriorates.
Where Data-Driven Strategy Breaks Down
Data is retrospective. It tells you what happened, in the channels you were measuring, among the audiences you were already reaching. It is structurally limited in its ability to tell you what to do next, particularly when “next” involves doing something genuinely new.
This is one reason why data-driven strategies tend to converge over time. Everyone is optimising toward the same signals, using similar tools, on the same platforms. The result is a kind of regression to the mean where everyone ends up competing for the same intent with the same formats. The differentiation that actually drives brand preference gets squeezed out because it is hard to measure in the short term.
There is also a customer experience dimension that data frequently misses. I have long believed that if a company genuinely delighted its customers at every opportunity, that alone would drive a significant amount of growth. Marketing is often deployed as a blunt instrument to prop up companies with more fundamental product or service issues. Data can tell you that churn is high. It cannot always tell you that the product is mediocre, or that the onboarding experience is frustrating, or that the customer service team is undertrained. Those problems show up in the numbers eventually, but by the time they do, a lot of marketing budget has been spent trying to fill a leaking bucket.
Go-to-market execution feels harder than it used to, and part of the reason is that data has raised the floor for everyone while not necessarily raising the ceiling. Everyone can now avoid the most obvious mistakes. Fewer people are doing the harder work of building something genuinely worth choosing.
Building a Data Strategy That Actually Supports Growth
A functional data-driven marketing strategy is not a technology stack. It is a set of decisions about what you are going to measure, why, and how you are going to act on it. The technology is in service of those decisions, not the other way around.
Start with the business question, not the data source. What are you actually trying to understand? Are you trying to figure out which channels are driving new customer acquisition? Are you trying to understand why your conversion rate is lower than it should be? Are you trying to identify which customer segments are most valuable over time? Each question requires different data, different measurement approaches, and different success criteria.
Then build your measurement framework around leading and lagging indicators. Lagging indicators (revenue, customer acquisition, retention) tell you how the business is performing. Leading indicators (brand search volume, engagement rates, pipeline velocity) tell you where it is heading. Most organisations over-index on lagging indicators because they are easier to connect to spend. The leading indicators are where you get early warning.
BCG’s work on commercial transformation consistently points to the importance of connecting marketing metrics to commercial outcomes rather than treating them as separate reporting tracks. The marketing dashboard and the business dashboard should be telling the same story, with marketing metrics clearly linked to the business results they are meant to drive.
Finally, build in a regular cadence of challenging your own assumptions. If your data consistently confirms what you already believed, that is a warning sign, not a reassurance. Good data strategy includes actively looking for evidence that your current approach is wrong. That is where the genuinely useful insights tend to live.
For more on how data strategy connects to commercial planning and go-to-market execution, the growth strategy hub covers the broader frameworks that sit around measurement, including how to structure your market approach and align marketing activity to business objectives.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
