Measuring Marketing Impact: What the Numbers Won’t Tell You

Measuring marketing impact means connecting your marketing activity to business outcomes, not just recording what happened. Most teams track plenty of data. The gap is between the metrics they collect and the commercial decisions those metrics should inform.

That gap is wider than most marketing leaders want to admit. I’ve seen it up close, across agencies, across industries, and across budget sizes that would make your eyes water. The problem isn’t a shortage of data. It’s a shortage of honest interpretation.

Key Takeaways

  • Most marketing measurement tracks activity, not impact. The distinction determines whether your data drives decisions or just fills dashboards.
  • Vanity metrics survive because they’re easy to produce and hard to argue with. Replacing them requires commercial courage, not better software.
  • The fastest-converting campaigns often reveal the most about your measurement blind spots, not just your marketing effectiveness.
  • Honest approximation is more useful than false precision. Acknowledging what you can’t measure is part of a credible measurement approach.
  • If your measurement framework wouldn’t survive a sceptical CFO asking “so what?”, it isn’t finished yet.

Why Impact Is the Wrong Word for Most Marketing Reports

Impact implies causation. Most marketing reports show correlation at best, and coincidence at worst. When a campaign runs and revenue goes up, the instinct is to connect the two. Sometimes that connection is real. Often it isn’t, and the honest answer is that you can’t be sure which is which without more rigorous thinking than most teams apply.

I spent years on the agency side watching clients conflate campaign activity with business performance. A retailer runs a brand campaign, sales lift during the period, and the campaign gets the credit. Nobody asks whether the lift was seasonal, whether a competitor had a stock problem, or whether the sales team ran a promotion in the same window. The campaign looks successful. The client feels good. The agency keeps the budget. Everyone moves on.

That cycle is comfortable. It’s also corrosive, because it means your measurement framework is designed to confirm rather than interrogate. And if your measurement only confirms, you’re not measuring impact. You’re producing evidence for a conclusion you already wanted to reach.

The Marketing Analytics hub covers the full measurement landscape in depth, from setting up the right data infrastructure to building frameworks that hold up under scrutiny. This article focuses on a narrower question: what does measuring impact actually require, and where do most teams fall short of it?

The Moment I Understood What Fast Revenue Reveals

Early in my career, I was working on paid search at lastminute.com. We launched a campaign for a music festival, a relatively simple build, and within roughly a day we had six figures of revenue attributed to it. It was one of those moments that makes you feel like marketing is genuinely powerful.

But what that campaign actually revealed wasn’t just that paid search worked. It revealed something more specific: that there was latent demand for that product that we hadn’t been capturing. The search volume was there. The intent was there. We just hadn’t been present. The campaign didn’t create demand. It captured demand that already existed.

That distinction matters enormously for measurement. If you’re measuring a demand-capture campaign against the same metrics as a brand-building campaign, you’re comparing fundamentally different things. Demand capture is measurable quickly and directly. Brand building operates on a longer time horizon and resists direct attribution. Treating them the same way produces misleading numbers in both directions.

Most digital marketing, paid search especially, is demand capture. It’s valuable. But it’s not the same as creating demand. Measurement frameworks that don’t account for this distinction will consistently overstate the contribution of bottom-funnel activity and understate the contribution of everything that built the brand awareness those searches depend on.

What Vanity Metrics Actually Cost You

Vanity metrics persist because they’re easy to produce and difficult to argue with in a room full of people who don’t want to have a difficult conversation. Impressions, reach, engagement rate, click-through rate: none of these are inherently useless, but all of them become useless when they’re reported as proxies for business performance without any evidence that the relationship holds.

I’ve sat in enough board-level marketing reviews to know how this plays out. The marketing team presents a slide showing strong engagement metrics. The CFO asks what that means for revenue. The marketing director says it’s a brand awareness campaign and awareness doesn’t convert directly. The CFO nods, unconvinced. The budget gets scrutinised at the next planning cycle.

The tragedy is that the marketing director might be right about the campaign. Brand investment does matter. It does compound over time. But if the measurement framework can’t demonstrate any plausible connection between the activity and commercial outcomes, the argument is lost regardless of whether it’s correct.

Buffer has a useful breakdown of content marketing metrics worth tracking, and what’s striking is how clearly it separates activity metrics from outcome metrics. The separation is obvious when you lay it out. The problem is that most teams don’t lay it out. They report everything in the same column and hope nobody notices that half of it doesn’t connect to anything that matters commercially.

Semrush covers similar ground on content metrics and how to prioritise them. The consistent theme across both is that the metrics closest to revenue tend to be the ones teams track least rigorously, because they’re harder to attribute cleanly and easier to get wrong.

The CFO Test: A Simple Filter for Measurement Credibility

When I was running agencies, I developed a habit of applying what I called the CFO test to any measurement framework before presenting it to a client. The test is simple: imagine the most commercially sceptical person in the room reading your report. What question do they ask? If your framework can’t answer it, the framework isn’t finished.

The CFO’s question is almost always some version of “so what?” Not dismissively, but genuinely: so what does this mean for the business? What decision should I make differently because of this data? What would have happened if we hadn’t done this?

That last question, what would have happened without the activity, is the hardest one in marketing measurement. It’s the counterfactual question, and it’s the one that separates genuine impact measurement from reporting on things that happened while marketing was running.

You can approximate the counterfactual in various ways: holdout tests, geo-based experiments, pre/post analysis with appropriate controls, matched market testing. None of these are perfect. All of them are better than assuming that correlation equals causation and building your budget case on that assumption.
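
As a concrete illustration, here is a minimal pre/post sketch with a control market, the simplest form of the counterfactual thinking described above. The figures are invented purely to show the arithmetic; in practice you would pull weekly revenue for a market that saw the campaign and a comparable market that did not.

    # Minimal pre/post sketch with a control market (a difference-in-differences
    # style estimate). All figures are invented for illustration.

    test_pre = [42_000, 44_500, 43_200, 45_100]      # weekly revenue before launch, campaign market
    test_post = [52_300, 54_800, 51_900, 53_400]     # weekly revenue after launch, campaign market
    control_pre = [38_900, 40_100, 39_400, 41_000]   # comparable market, no campaign
    control_post = [41_200, 42_500, 40_800, 42_100]

    def mean(values):
        return sum(values) / len(values)

    test_lift = mean(test_post) - mean(test_pre)            # raw lift where the campaign ran
    control_drift = mean(control_post) - mean(control_pre)  # what happened anyway

    # The incremental estimate is the difference between the two changes,
    # not the raw pre/post lift on its own.
    incremental = test_lift - control_drift
    print(f"Raw lift in campaign market: {test_lift:,.0f} per week")
    print(f"Drift in control market:     {control_drift:,.0f} per week")
    print(f"Estimated incremental lift:  {incremental:,.0f} per week")

It is a rough estimate, not proof of causation, but it forces the counterfactual question into the arithmetic rather than leaving it implicit.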

Forrester has written about the evolution of marketing reporting and the shift toward forward-looking, commercially grounded measurement. The direction of travel is clear: reporting on what happened is table stakes. The value is in explaining why it happened and what it means for what you do next.

Where Dashboards Go Wrong

Dashboards are seductive. They create the impression of control. A well-designed dashboard with clean visualisations and green arrows pointing upward feels like measurement. It isn’t, necessarily. It’s display. The measurement happened, or didn’t happen, before the dashboard was built.

I’ve seen organisations spend months and significant budget building dashboards that aggregate data beautifully and inform decisions not at all. The problem is usually that the dashboard was designed around what data was available, not around what decisions the business needed to make. Those are different starting points and they produce very different outputs.

MarketingProfs has covered how to build a marketing dashboard and separately examined whether marketing dashboards are worth the investment. Both pieces make the same underlying point: a dashboard is only as good as the thinking that preceded it. If you haven’t defined what decisions the dashboard needs to support, you’ll build something that looks comprehensive and tells you nothing useful.

The right sequence is: start with the business decision, identify what information would change that decision, determine what data would produce that information, then build the dashboard to surface it. Most teams reverse this. They start with the data they have, build a dashboard around it, and then try to retrofit it to business decisions. The result is a lot of dashboards that marketing teams understand and nobody else uses.
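
To make that sequence concrete, here is a minimal sketch of what a decision-first dashboard specification might look like before anyone opens a reporting tool. Every entry is hypothetical; the point is that each headline metric traces back to a decision it could change.

    # Decision-first dashboard spec, sketched as plain data. Each headline metric
    # must trace back to a decision it could change. All entries are hypothetical.

    dashboard_spec = [
        {
            "decision": "Shift budget between paid search and paid social next quarter",
            "information_needed": "Incremental revenue per pound spent, by channel",
            "data_required": ["geo holdout results", "channel spend", "attributed revenue"],
            "headline_metric": "incremental return on ad spend by channel",
        },
        {
            "decision": "Keep or cut the weekly newsletter",
            "information_needed": "30-day conversion rate of recipients vs non-recipients",
            "data_required": ["send logs", "CRM conversions", "matched non-recipient cohort"],
            "headline_metric": "recipient vs non-recipient conversion delta",
        },
    ]

    for entry in dashboard_spec:
        print(entry["decision"], "->", entry["headline_metric"])

Anything that doesn't map to a decision in a structure like this belongs in diagnostic reporting, not on the headline dashboard.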

Email, Attribution, and the Metrics That Mislead

Email is worth examining specifically because it’s a channel where measurement feels precise but frequently isn’t. Open rates, click rates, unsubscribe rates: these are clean numbers that create a sense of clarity. The problem is that they measure engagement with the email, not impact on the business.

HubSpot’s guide to email marketing reporting does a good job of distinguishing between email-level metrics and downstream business metrics. The distinction is important because email attribution is notoriously unreliable. Last-click attribution gives all the credit to the email that preceded a conversion, ignoring everything else that contributed to the customer’s decision. First-touch attribution does the opposite. Neither reflects reality particularly well.
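
A small sketch makes the disagreement between the two rules visible. The journeys below are invented; each is an ordered list of touchpoints ending in a conversion.

    # Credit the same four conversions under last-click and first-touch rules.
    # Journeys are ordered touchpoint lists ending in a conversion (invented data).

    from collections import Counter

    journeys = [
        ["display", "organic", "email"],       # email takes last-click credit
        ["email", "paid_search"],              # paid search takes last-click credit
        ["social", "email", "paid_search"],
        ["email"],                             # single touch: both rules agree
    ]

    last_click = Counter(journey[-1] for journey in journeys)
    first_touch = Counter(journey[0] for journey in journeys)

    print("Last-click credit: ", dict(last_click))   # {'email': 2, 'paid_search': 2}
    print("First-touch credit:", dict(first_touch))  # {'display': 1, 'email': 2, 'social': 1}

Both tables allocate the same conversions; neither tells you what the email actually caused, which is why the comparison questions in the following paragraphs matter more than which attribution model you pick.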

I managed email programmes for clients across multiple sectors during my agency years, and the most consistent finding was that the emails that drove the highest open rates were rarely the ones that drove the most revenue. Subject line optimisation is a legitimate discipline, but it optimises for opens, not for outcomes. If your email measurement stops at open rate, you’re measuring the wrong thing.

The more useful questions are: what did people who received this email do over the next 30 days, compared to people who didn’t? Did email recipients convert at a higher rate on other channels? Did the email sequence change purchase frequency or average order value? These questions require more sophisticated analysis and longer time windows. They also produce answers that actually matter.
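
Here is a minimal sketch of the first of those comparisons, assuming you can join send logs to a customer table with a conversion flag covering the 30 days after the send. The column names and values are hypothetical.

    # Recipient vs non-recipient comparison over a 30-day window.
    # Column names and values are hypothetical; real data would come from
    # joining send logs to CRM or transaction records.

    import pandas as pd

    customers = pd.DataFrame({
        "customer_id":    [1, 2, 3, 4, 5, 6, 7, 8],
        "received_email": [1, 1, 1, 1, 0, 0, 0, 0],
        "converted_30d":  [1, 0, 1, 1, 0, 1, 0, 0],
    })

    rates = customers.groupby("received_email")["converted_30d"].mean()
    print(f"Recipients:     {rates.loc[1]:.0%} converted within 30 days")
    print(f"Non-recipients: {rates.loc[0]:.0%} converted within 30 days")
    print(f"Raw difference: {rates.loc[1] - rates.loc[0]:+.0%}")

    # Caveat: this is a comparison, not a controlled test. If recipients were
    # already your most engaged customers, the gap overstates the email's effect.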

The Data Infrastructure Question

Measurement quality is constrained by data quality, and data quality is constrained by infrastructure. This is the unglamorous part of marketing analytics that most strategy conversations skip over, because it involves talking to developers, understanding data pipelines, and accepting that the numbers you’ve been reporting might not be as clean as you thought.

GA4 is the default starting point for most organisations, and it has genuine strengths. But it also has limitations that matter for impact measurement. Sampling, session-based attribution, and the way GA4 handles cross-device journeys all introduce noise into your data. Moz has a useful explainer on why exporting GA4 data to BigQuery gives you more control and more accuracy for serious analysis. It’s not a step most small teams need, but for anyone trying to do rigorous impact measurement at scale, it matters.
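
As a hedged sketch of what that looks like in practice: once the export is enabled, the raw event tables can be queried directly, unsampled. The project and dataset names below are placeholders, and while the fields shown (event_date, event_name, user_pseudo_id, ecommerce.purchase_revenue) follow the standard GA4 export schema, check your own export before relying on them.

    # Query the raw GA4 export tables in BigQuery instead of the sampled UI numbers.
    # "my-project.analytics_123456" is a placeholder for your export dataset.

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()

    sql = """
    SELECT
      event_date,
      COUNT(DISTINCT user_pseudo_id) AS purchasers,
      SUM(ecommerce.purchase_revenue) AS revenue
    FROM `my-project.analytics_123456.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
      AND event_name = 'purchase'
    GROUP BY event_date
    ORDER BY event_date
    """

    for row in client.query(sql).result():
        print(row.event_date, row.purchasers, row.revenue)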

GA4 is also not the only option. Moz’s overview of Google Analytics alternatives is worth reading if you’re questioning whether your current setup is fit for purpose. The right tool depends on your data volume, your team’s technical capability, and what questions you’re actually trying to answer. There’s no universal answer.

What I’d caution against is the assumption that better tools automatically produce better measurement. I’ve seen organisations migrate to more sophisticated platforms and produce exactly the same quality of insight they had before, because the problem was never the tool. It was the absence of clear questions that the measurement was supposed to answer.

The Honest Approximation Principle

One of the most commercially useful things I learned from years of agency leadership is that clients don’t need perfect measurement. They need honest measurement. The distinction sounds small. It isn’t.

Perfect measurement is impossible in marketing. There are too many variables, too many unmeasured touchpoints, too much that happens offline or in contexts you can't observe. Chasing perfection produces paralysis or, worse, false precision: numbers reported to two decimal places but built on assumptions the team has stopped questioning.

Honest approximation means being explicit about what you can measure directly, what you’re inferring, and what you genuinely can’t attribute. It means saying “we estimate this campaign contributed to a 12% lift in consideration, based on brand tracker data, but we can’t isolate it from the PR activity that ran in the same period.” That’s less satisfying than a clean attribution number. It’s also more accurate, and it builds more trust with commercial stakeholders than precision that doesn’t survive scrutiny.

When I judged the Effie Awards, the entries that impressed me most weren’t the ones with the most sophisticated attribution models. They were the ones where the team had clearly thought hard about what they could and couldn’t claim, and had built their effectiveness case on the solid ground rather than the shaky edges. Intellectual honesty about measurement limitations is a competitive advantage, not a weakness.

If you’re building out a broader analytics capability and want to understand how measurement fits into the larger picture, the Marketing Analytics hub covers everything from GA4 configuration to measurement planning in a way that connects the technical and commercial sides of the discipline.

What Good Impact Measurement Changes

When measurement is genuinely connected to impact, it changes behaviour. Not just reporting behaviour, but actual marketing decisions. Budgets shift toward what demonstrably works. Channels that look productive but aren’t get less resource. Teams stop defending activity for its own sake and start defending it for what it produces.

I’ve seen this happen in organisations that made the shift seriously. The immediate effect is usually uncomfortable, because honest measurement tends to reveal that some things that felt important weren’t producing much. The medium-term effect is almost always positive, because the budget and attention that were going to low-impact activity get redirected to things that actually move the needle.

The longer-term effect is cultural. Teams that measure honestly develop a different relationship with their own work. They become more curious and less defensive. They ask better questions before launching campaigns, because they know they’ll have to answer for the results. That shift in orientation, from activity-focused to outcome-focused, is worth more than any specific measurement technique.

If I had to summarise what measuring marketing impact actually requires, it would be this: start with a business question, not a metric. Build your measurement around what would change a decision, not what’s easy to track. Be honest about what you can and can’t attribute. And apply the CFO test before you present anything, because if you can’t answer “so what?”, you haven’t finished the work.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between measuring marketing activity and measuring marketing impact?
Measuring activity tracks what your marketing did: impressions served, emails sent, clicks recorded. Measuring impact tracks what your marketing caused: revenue generated, demand created, customer behaviour changed. The difference is causation versus correlation. Most teams measure activity and call it impact, which produces data that looks useful but doesn’t support meaningful commercial decisions.
Why do vanity metrics persist in marketing reporting?
Vanity metrics persist because they’re easy to produce, hard to argue with, and rarely challenged by people who don’t want a difficult conversation. Impressions and engagement rates look impressive in a slide deck. They also require no proof of commercial connection. Replacing them with outcome-based metrics requires both better measurement infrastructure and the commercial courage to report numbers that might be less flattering.
How do you measure the impact of brand marketing when it doesn’t convert directly?
Brand measurement requires different tools and longer time horizons than performance marketing. Brand trackers measure awareness, consideration, and preference over time. Econometric modelling can separate brand effects from short-term promotional effects. Holdout tests can isolate the contribution of brand activity in specific markets. None of these are perfect, but together they can build a credible approximation of brand impact without relying on last-click attribution, which systematically undervalues brand investment.
What is the counterfactual question in marketing measurement and why does it matter?
The counterfactual question asks what would have happened if the marketing activity hadn’t run. It’s the question that separates genuine impact measurement from coincidence. If revenue goes up during a campaign, the counterfactual question is whether revenue would have gone up anyway. Answering it properly requires holdout tests, geo-based experiments, or matched market analysis. Without some form of counterfactual thinking, you’re attributing outcomes to marketing that may have happened regardless.
How should marketing teams prioritise which metrics to track?
Start with the business decisions that marketing data needs to inform, then work backwards to the metrics that would change those decisions. If a metric wouldn’t cause you to do anything differently, it probably doesn’t need to be in your primary reporting. Prioritise metrics closest to commercial outcomes: revenue, customer acquisition cost, retention rate, and contribution to pipeline. Track activity metrics as diagnostic tools, not as headline performance indicators.
