Measuring Marketing Impact Without Fooling Yourself

Measuring marketing impact means connecting what you spend to what the business actually gets back, in terms that finance, leadership, and the market itself would recognise. It sounds straightforward. In practice, most marketing teams measure activity and call it impact, which is a different thing entirely.

The gap between those two things, activity and impact, is where most measurement frameworks quietly fall apart. You can have a dashboard full of green numbers and still be running a function that makes almost no difference to the business. I’ve seen it. I’ve been inside organisations where the reporting looked healthy right up until someone asked whether the marketing was actually working.

Key Takeaways

  • Most marketing teams measure activity metrics, not business impact. The two are not the same, and confusing them is expensive.
  • The closer a metric sits to a commercial outcome, the more useful it is. Impressions and engagement are proxies at best, distractions at worst.
  • Attribution models don’t tell you what caused a result. They tell you which touchpoints were present. That distinction matters enormously when you’re making budget decisions.
  • Honest approximation beats false precision. A directionally correct measurement framework is more useful than a technically sophisticated one that nobody trusts or acts on.
  • If better measurement would expose how little some of your activity is doing, that’s exactly why you need it.

Why Most Marketing Measurement Stops at Activity

There’s a gravitational pull in marketing toward measuring what’s easy to measure. Clicks, impressions, open rates, follower counts. These numbers are available, they update in real time, and they tend to go up if you’re doing anything at all. The problem is that none of them tell you whether the business is better off because of what you did.

I spent years on the agency side managing performance marketing budgets across dozens of clients. One thing I noticed consistently: the clients who were most comfortable with their measurement were often the ones with the most to hide. They’d built reporting systems that were optimised for comfort, not clarity. Every metric was something the team could influence directly. Nothing was connected to revenue in a way that would survive scrutiny.

This isn’t laziness. It’s a rational response to an uncomfortable truth. If you measure marketing by its actual impact on business performance, some of what you’re doing won’t survive that test. Measuring activity is safer. It’s also, over time, corrosive, because it disconnects marketing from the commercial reality it’s supposed to serve.

If you want a broader grounding in how analytics frameworks should be structured, the Marketing Analytics hub covers the principles and tools that make measurement useful rather than decorative.

What Does Genuine Marketing Impact Look Like?

Genuine impact means the business is in a materially better position because of what marketing did. That can take different forms depending on the context: more revenue, lower customer acquisition cost, faster sales cycles, stronger retention, higher average order value, improved brand consideration in a category where you’re trying to grow. The form it takes depends on what the business is trying to achieve.

The common thread is that impact is always expressed in business terms, not marketing terms. Reach is a marketing term. Revenue is a business term. Share of voice is a marketing term. Market share is a business term. The closer your measurement sits to the business outcome, the more useful it is.

Early in my career, I ran a paid search campaign for a music festival while at lastminute.com. The campaign was not complicated. The targeting was tight, the creative was functional, and the bid strategy was straightforward. What it produced was six figures of revenue in roughly a day. Nobody needed to debate whether that campaign had impact. The connection between what we did and what the business got back was direct and legible. Not all marketing works that way, and I’m not suggesting it should, but that experience set a standard in my head for what impact actually means. When you can see it, you know it. When you can’t, you’re usually measuring something else.

The Attribution Problem Nobody Wants to Talk About Honestly

Attribution is where marketing measurement gets genuinely complicated, and where a lot of confident-sounding frameworks quietly break down. The core problem is this: attribution models tell you which touchpoints were present in the path to a conversion. They do not tell you which touchpoints caused the conversion. That distinction is not a technicality. It’s the difference between making good budget decisions and making confident-sounding bad ones.

Last-click attribution, which still dominates more reporting than it should, assigns full credit to the final touchpoint before conversion. This consistently flatters channels like branded paid search and retargeting, which tend to appear at the end of the experience, and systematically undervalues the channels that created the demand in the first place. The result is that you optimise toward demand capture and starve demand creation. Over time, the pipeline dries up and nobody can quite explain why.
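
To make the distinction concrete, here is a minimal sketch of two rule-based attribution models applied to the same conversion path. The channel names are invented and the logic deliberately simplified; no ad platform works exactly like this.

```python
# Minimal sketch: two rule-based attribution models applied to the same
# conversion path. Channel names are invented; illustrative only.

def last_click(path):
    """Give 100% of the conversion credit to the final touchpoint."""
    return {path[-1]: 1.0}

def linear(path):
    """Spread the credit evenly across every touchpoint in the path."""
    share = 1.0 / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# The same path produces two very different stories about what "worked".
path = ["organic_social", "content", "email", "branded_paid_search"]
print(last_click(path))  # {'branded_paid_search': 1.0}
print(linear(path))      # every channel gets 0.25

```

Same path, same customer, two completely different answers about which channel deserves the budget. Neither model observed anything causal; they just applied different rules to the touchpoints that happened to be present.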

Data-driven attribution models, which GA4 now uses by default, are more sophisticated, but they carry their own limitations. They’re trained on your own conversion data, which means they reflect your historical patterns rather than causal truth. They also struggle with channels that are harder to track, offline touchpoints, and the long, diffuse influence of brand activity. Understanding what GA4’s attribution model is actually doing under the hood matters more than most marketers realise.

None of this means attribution is useless. It means you should treat it as one input among several, not as a ground truth. The most dangerous thing you can do with an attribution model is believe it completely.

The Metrics That Actually Tell You Something

Different channels and functions produce different types of evidence. The mistake is applying the same measurement logic to everything, or worse, rolling everything up into a single dashboard number that smooths out all the signal.

For performance channels, the metrics that matter are the ones closest to commercial outcomes: cost per acquisition, return on ad spend, revenue per channel, and contribution to pipeline. These are not perfect, because attribution is imperfect, but they’re the right level of abstraction. Tracking click-through rate as a primary metric for a paid search campaign is like judging a sales rep by how many calls they make rather than how many deals they close.
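
The arithmetic behind those headline ratios is simple; the hard part is trusting the inputs. A minimal sketch with invented numbers, where real inputs would come from ad platform and CRM exports:

```python
# Core performance-channel ratios as a minimal sketch. Numbers are
# invented; real inputs come from ad platform and CRM exports.

def cpa(spend: float, acquisitions: int) -> float:
    """Cost per acquisition: what each new customer cost to win."""
    return spend / acquisitions if acquisitions else float("inf")

def roas(attributed_revenue: float, spend: float) -> float:
    """Return on ad spend: attributed revenue per unit of spend."""
    return attributed_revenue / spend if spend else 0.0

print(cpa(spend=12_000, acquisitions=80))             # 150.0 per customer
print(roas(attributed_revenue=48_000, spend=12_000))  # 4.0x
```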

For content and organic channels, the connection to commercial outcomes is less direct and the measurement needs to reflect that. Organic traffic, keyword rankings, and time-on-page are useful signals, but they need to be connected to downstream behaviour. Content marketing metrics are most useful when they’re segmented by intent and traced through to conversion, rather than reported in aggregate as evidence that content is “working.”

For email, the metrics that matter depend entirely on what the email is supposed to do. Open rate is a hygiene metric, useful for diagnosing deliverability and subject line performance, not for measuring whether the programme is valuable to the business. Meaningful email metrics are the ones that connect to revenue, retention, or the specific action the email was designed to drive.

For brand activity, the measurement is harder and the timescales are longer. Brand tracking, share of search, and category consideration data are the right tools here. They’re slower, less precise, and more expensive to run than performance metrics. They’re also the only way to measure whether brand investment is doing anything. Trying to measure brand activity with performance metrics is a category error that produces misleading results and bad decisions.
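
Of those tools, share of search is the cheapest to approximate yourself. A minimal sketch, with invented brand names and search volumes that would in practice come from a keyword research tool:

```python
# Share of search: your brand's search volume as a share of the category.
# Brand names and volumes are invented; use a keyword tool for real data.

brand_volumes = {"your_brand": 18_000, "rival_a": 42_000, "rival_b": 30_000}
category_total = sum(brand_volumes.values())

for brand, volume in brand_volumes.items():
    print(f"{brand}: {volume / category_total:.1%} share of search")
# your_brand: 20.0%, rival_a: 46.7%, rival_b: 33.3%
```

Tracked monthly, the trend matters more than any single reading.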

Why Dashboards Often Make Measurement Worse

There’s an industry around marketing dashboards that has convinced a lot of organisations they need more data, better visualised, more frequently updated. In my experience, this is usually the wrong direction. The problem most marketing teams have is not insufficient data. It’s too much data, poorly prioritised, with no clear line from the numbers to a decision.

A dashboard that shows 40 metrics is not a measurement system. It’s a way of making everyone feel informed while nobody is accountable for anything specific. Forrester’s observation that reporting more doesn’t mean reporting better is one of those things that’s obvious once someone says it and ignored constantly in practice.

The question to ask about any dashboard is not “what does this show?” but “what decision does this enable?” If you can’t answer that question for a given metric, the metric probably doesn’t belong in the dashboard. The investment in marketing dashboards only pays off when the outputs are connected to decisions that affect performance. Otherwise you’re spending time and money producing reports that make people feel busy.

When I was running agencies, one of the most useful things I could do for a new client was strip their reporting back. Not because detail is bad, but because most reporting systems accumulate metrics over time without ever removing the ones that stopped being useful. The result is a dashboard that takes an hour to walk through and leaves everyone less clear than when they started.

Incrementality: The Question That Changes Everything

The most important question in marketing measurement is one that most teams never formally ask: what would have happened without this? That’s the question of incrementality, and it reframes every conversation about marketing impact.

Without incrementality thinking, you can easily convince yourself that a channel is driving revenue when it’s actually just showing up at the end of an experience that would have completed anyway. Branded search is the classic example. People who are already going to buy from you search for your brand name and click your paid ad. The ad gets credited with a conversion. The conversion would have happened regardless. The incremental value of that spend is close to zero, possibly negative when you account for the cost.

Running incrementality tests (geo holdout tests, conversion lift studies, matched market experiments) is not straightforward. It requires planning, patience, and a willingness to accept results that might be uncomfortable. But it’s the closest thing to a ground truth that most marketing teams can access. Everything else is correlation dressed up as causation.
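
The arithmetic of a geo holdout readout is simple once the hard part, matching the markets, is done. A minimal sketch with invented numbers, assuming test and control markets were matched on pre-period performance:

```python
# Minimal sketch of a geo holdout readout. Assumes test and control
# markets were matched on pre-period performance. Numbers are invented.

def incremental_lift(test_conversions, control_conversions, test_spend,
                     revenue_per_conversion):
    """Treat the held-out markets as the counterfactual: what would have
    happened without the spend."""
    incremental = test_conversions - control_conversions
    lift_pct = incremental / control_conversions * 100
    # Incremental ROAS: revenue the spend actually created, per unit spent
    iroas = (incremental * revenue_per_conversion) / test_spend
    return incremental, lift_pct, iroas

inc, lift, iroas = incremental_lift(
    test_conversions=1_150,     # matched markets with the campaign live
    control_conversions=1_000,  # matched markets held out
    test_spend=30_000,
    revenue_per_conversion=400,
)
print(f"{inc} incremental conversions, +{lift:.0f}% lift, iROAS {iroas:.1f}x")
# -> 150 incremental conversions, +15% lift, iROAS 2.0x
```

Compare that incremental ROAS with the attributed ROAS the platform reports for the same channel, and you have a direct measure of how much the attribution model is flattering it.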

I’ve seen the results of incrementality tests surprise even experienced marketers. Channels that looked indispensable in the attribution model turned out to be largely redundant. Channels that looked like supporting players turned out to be doing most of the actual work. The gap between attributed performance and incremental performance can be significant, and it has real consequences for where budget should go.

Building a Measurement Framework That’s Actually Used

A measurement framework that lives in a document and gets referenced once a quarter is not a measurement framework. It’s a planning artefact. The test of whether your measurement approach is working is whether it changes decisions. If the same decisions would be made without the data, the measurement isn’t functioning.

Useful measurement frameworks share a few characteristics. They’re connected to specific business objectives, not to channel activity. They have a small number of primary metrics that everyone agrees matter, with secondary metrics that provide diagnostic context rather than strategic direction. They have a clear cadence: what gets reviewed daily, weekly, monthly, and quarterly, with different questions at each level. And they have an owner, someone who is accountable for the quality and accuracy of the data, not just for producing the report.
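
One way to force that clarity is to write the framework down as data rather than prose, so the primary metrics, cadence, and owner are explicit instead of implied. This is purely illustrative; every name in it is hypothetical:

```python
# Illustrative only: a measurement framework written down as data.
# All objectives, metrics, and roles here are hypothetical.

framework = {
    "objective": "grow new-customer revenue 20% year on year",
    "primary_metrics": [
        "new-customer revenue",
        "blended CPA",
        "pipeline contribution",
    ],
    "diagnostic_metrics": [
        "channel-level CPA",
        "conversion rate by intent segment",
        "organic traffic by intent",
    ],
    "cadence": {
        "daily": "spend pacing and tracking health",
        "weekly": "channel performance vs plan",
        "monthly": "primary metrics vs objective",
        "quarterly": "incrementality results and budget reallocation",
    },
    "owner": "head of marketing analytics",  # accountable for data quality
}
```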

The technical infrastructure matters too. GA4 has changed how event tracking works, and getting it right requires deliberate setup rather than relying on defaults. Custom event tracking in GA4 is one of those areas where a small amount of upfront work produces significantly better data over time. The default implementation tells you a lot about traffic. A well-configured implementation tells you about behaviour that connects to commercial outcomes.
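
As a concrete example, here is a minimal sketch of sending a custom event server-side via GA4’s Measurement Protocol. The event name and parameters are hypothetical, and in-browser tracking would normally go through gtag.js or Tag Manager instead:

```python
# Minimal sketch: sending a custom GA4 event server-side via the
# Measurement Protocol. Event and parameter names are hypothetical.

import requests

MEASUREMENT_ID = "G-XXXXXXX"    # your GA4 property's measurement ID
API_SECRET = "your_api_secret"  # created in GA4 Admin under Data Streams

def send_custom_event(client_id: str, plan: str, value: float) -> int:
    payload = {
        "client_id": client_id,  # ties the event to a known device/user
        "events": [{
            "name": "trial_started",  # hypothetical custom event
            "params": {
                "plan": plan,      # only reportable once registered as a
                                   # custom dimension in GA4
                "value": value,    # lets revenue flow into reports
                "currency": "GBP",
            },
        }],
    }
    resp = requests.post(
        "https://www.google-analytics.com/mp/collect",
        params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
        json=payload,
        timeout=10,
    )
    return resp.status_code  # 2xx means accepted, not validated

print(send_custom_event(client_id="555.1234567890", plan="pro", value=99.0))
```

Parameters like plan only show up in reports once they are registered as custom dimensions, which is exactly the kind of deliberate setup the defaults will not do for you.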

Automation has a role, but it’s not a substitute for judgment. Automating dashboard reporting can save time and reduce error, but it doesn’t solve the harder problem of deciding what to measure and what to do about it. The human part of measurement is the part that matters most.

One thing I’d add from experience: the best measurement frameworks I’ve seen were built collaboratively, with input from finance, commercial leadership, and the marketing team itself. When measurement is designed by marketing for marketing, it tends to optimise for comfort. When it’s designed with finance in the room, it tends to optimise for accountability. The second version is harder to build and more useful to have.

The Honest Approximation Principle

Marketing measurement will never be perfect. The customer experience is not a clean linear sequence. Influence is diffuse and sometimes invisible. The counterfactual, what would have happened without the marketing, is never directly observable. Anyone who tells you they’ve solved this is either selling something or hasn’t thought hard enough about the problem.

What you can have is honest approximation: a measurement approach that’s directionally correct, consistently applied, and genuinely connected to business outcomes. That’s more valuable than a technically sophisticated system that nobody understands or trusts, or a precise-looking model built on assumptions nobody has examined.

The discipline is in being clear about what you know, what you’re estimating, and what you genuinely don’t know. That kind of intellectual honesty is rarer in marketing than it should be, partly because there’s pressure to appear confident and data-driven at all times. But the teams that are most effective at measurement are usually the ones that are most comfortable saying “we think this is working based on these signals, and here’s what we’d need to see to be more certain.”

If you want to go deeper on the tools and frameworks that support this kind of measurement, the Marketing Analytics hub at The Marketing Juice covers everything from GA4 configuration to attribution approaches and beyond.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between measuring marketing activity and measuring marketing impact?
Activity metrics measure what marketing does: clicks, impressions, emails sent, content published. Impact metrics measure what the business gets back: revenue generated, cost per acquisition, pipeline contribution, customer retention. The two are related but not the same. You can have strong activity metrics and weak business impact, which is a common and expensive situation. Genuine measurement of marketing impact requires connecting what marketing does to outcomes the business cares about, expressed in commercial terms.
Why is attribution not the same as measuring marketing impact?
Attribution models show which touchpoints were present in the path to a conversion. They don’t show which touchpoints caused the conversion. A customer who was already going to buy from you might click a branded search ad on the way to checkout. The ad gets attributed credit. The incremental value it added may be close to zero. Attribution is a useful input, but treating it as a direct measure of impact leads to misallocated budget and overconfidence in channels that capture demand rather than create it.
How do you measure the impact of brand marketing?
Brand marketing operates on longer timescales and through more diffuse mechanisms than performance marketing, so it requires different measurement tools. Brand tracking surveys, share of search, category consideration data, and net promoter score are the most common approaches. Matched market tests and media mix modelling can also provide evidence of brand impact on commercial outcomes over time. Trying to measure brand activity using performance metrics like click-through rate or last-click conversions produces misleading results and tends to cause underinvestment in brand over time.
What is incrementality testing and why does it matter?
Incrementality testing measures the difference in outcomes between a group exposed to marketing activity and a comparable group that wasn’t. It answers the question: what would have happened without this? Common approaches include geo holdout tests, where you run activity in some markets but not others, and conversion lift studies run through ad platforms. Incrementality matters because it’s the closest thing to a causal measure of marketing impact. Without it, you’re measuring correlation, which can look convincing but lead to significantly wrong budget decisions.
How many metrics should a marketing measurement framework include?
Fewer than most teams think. A useful framework typically has three to five primary metrics that are directly connected to business objectives, supported by a larger set of diagnostic metrics that help explain movements in the primary ones. The primary metrics should be the ones that drive decisions and accountability. If a metric doesn’t change what you do, it probably doesn’t belong at the primary level. Most marketing dashboards include too many metrics, which diffuses attention and makes it harder to act on what the data is showing.
