Measuring Marketing Strategy: What Most Teams Get Wrong

Measuring marketing strategy means connecting your marketing activity to business outcomes, not just campaign metrics. Most teams measure the wrong things, credit the wrong channels, and draw the wrong conclusions, which means they keep investing in activity that looks productive but contributes little to actual growth.

The fix is not more dashboards. It is a clearer framework for what you are trying to measure, why it matters, and how honest you are willing to be about what the numbers are actually telling you.

Key Takeaways

  • Most marketing measurement is built around channel metrics, not business outcomes, which means teams optimise for the wrong things.
  • Performance marketing captures existing demand more than it creates new demand. Crediting it for growth you would have had anyway is one of the most common and costly measurement errors.
  • A measurement framework needs three layers: activity metrics, leading indicators, and lagged business outcomes. Most teams only track the first.
  • Honest approximation beats false precision. A directionally correct read on marketing performance is more useful than a precise number measuring the wrong thing.
  • If you cannot connect your marketing spend to a business outcome within a defined timeframe, that is a strategy problem, not a reporting problem.

Why Most Marketing Measurement Fails Before It Starts

I spent years reviewing marketing performance reports that looked impressive on paper. Click-through rates up. Cost per acquisition down. Return on ad spend at a healthy multiple. And yet the businesses behind those reports were not growing at the rate the marketing activity seemed to promise. When I dug into the numbers, the same pattern kept surfacing: the measurement framework had been built to show activity, not impact.

This is not a technology problem. The tools exist. Semrush, analytics platforms, attribution software: there is no shortage of ways to collect data. The problem is that measurement frameworks are usually designed by people who are incentivised to show that their work is working. Which means the metrics chosen tend to be the ones most likely to look good, not the ones most likely to be true.

When I was running an agency and we were pitching for retained contracts, clients would often show us their current reporting. Impressions, clicks, engagement rates, cost per lead. Rarely revenue. Rarely market share. Rarely any metric that a CFO would use to evaluate a business investment. The gap between what marketing reports and what business leadership cares about is where most measurement frameworks fall apart.

If you are thinking about how measurement fits into your broader go-to-market approach, the Go-To-Market and Growth Strategy hub covers the wider strategic context, including how to build plans that connect activity to commercial outcomes from the start.

What Are You Actually Trying to Measure?

Before you build a measurement framework, you need to be clear on what marketing strategy is supposed to achieve. Not in abstract terms. In commercial terms. Revenue growth, customer acquisition, retention, market penetration, brand preference among a defined audience. Pick the ones that matter to your business at this stage of growth, and build your measurement around those outcomes.

The reason this sounds obvious but rarely happens in practice is that different stakeholders want different things from marketing measurement. The performance team wants to justify their channel spend. The brand team wants to show awareness is building. The CEO wants to know if marketing is driving growth. These are not always the same question, and trying to answer all of them with a single dashboard usually means answering none of them well.

A workable measurement framework has three layers, and most teams only operate at the first.

The first layer is activity metrics: impressions, clicks, cost per click, engagement rate, email open rate. These tell you whether your campaigns are running and whether people are responding to them. They are useful for optimisation but they are not evidence of business impact.

The second layer is leading indicators: things that move before revenue does. Brand search volume, share of voice in your category, new audience reach, pipeline velocity, time to conversion. These are harder to measure but they are much closer to what strategy is actually trying to shift.

The third layer is lagged business outcomes: revenue, customer acquisition cost, lifetime value, market share, net revenue retention. These are what the business cares about. They move slowly and they are influenced by many factors beyond marketing, which is exactly why most marketing teams avoid anchoring their reporting here. But if your marketing strategy cannot be connected to at least one of these outcomes within a defined timeframe, you have a strategy problem, not a reporting problem.

The Performance Marketing Attribution Trap

Earlier in my career I was a true believer in lower-funnel performance marketing. The numbers were clean, the attribution was direct, and the case for investment was easy to make. It took me longer than I would like to admit to recognise that a significant portion of what performance marketing was being credited for was going to happen anyway.

Think about how paid search actually works in most categories. Someone has already decided they want to buy something. They search for it. Your ad appears. They click. They convert. The attribution model credits the paid click with the conversion. But the person was already in market. They were going to find you or a competitor regardless. The ad may have influenced which brand they chose, or it may simply have been the last touchpoint in a decision that had already been made.

I use a simple mental model for this. Think about a clothes shop. Someone who walks in and tries something on is far more likely to buy than someone who just browses the window. The fitting room visit is the conversion event. But the decision to enter the shop, to engage with that brand at all, was made before the fitting room. If you only measure the fitting room, you miss everything that drove the customer through the door in the first place.

Performance marketing is excellent at capturing demand. It is much less effective at creating it. And businesses that measure only their performance channels end up with a distorted picture of what is driving growth. They over-invest in capturing existing intent and under-invest in building the awareness and preference that creates future intent. This is one reason why go-to-market execution feels increasingly difficult for many teams despite rising ad spend.

The measurement implication is straightforward. If you want to understand whether your marketing strategy is working, you need to measure new audience reach alongside conversion metrics. You need to track brand search volume as an indicator of awareness building. You need to look at whether you are acquiring genuinely new customers or simply converting people who were already close to buying.

How to Build a Measurement Framework That Holds Up

A measurement framework that actually serves your strategy needs four things: a clear business objective, a defined set of metrics at each layer, a baseline to measure against, and a timeframe that is honest about how long marketing takes to work.

Start with the business objective. Not “increase brand awareness.” Something specific: acquire 2,000 new customers in the next 12 months at a cost per acquisition below a defined threshold, or grow revenue in a new segment from zero to a meaningful contribution within two years. The specificity matters because vague objectives produce vague measurement.
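An objective framed this way can be checked mechanically. As a rough illustration, here is a pacing check in Python against a target like the one above; all the figures (spend, customer counts, the £150 threshold) are invented for the example, not taken from any real plan.

```python
# Hypothetical pacing check for an objective like:
# "acquire 2,000 new customers in 12 months at a CPA below a £150 threshold".
# All numbers below are illustrative.

def pacing_report(spend_to_date, customers_to_date, months_elapsed,
                  target_customers=2000, cpa_threshold=150.0, horizon_months=12):
    cpa = spend_to_date / customers_to_date          # blended cost per acquisition so far
    run_rate = customers_to_date / months_elapsed    # customers acquired per month
    projected = run_rate * horizon_months            # full-horizon projection at current pace
    return {
        "cpa": round(cpa, 2),
        "cpa_ok": cpa <= cpa_threshold,
        "projected_customers": round(projected),
        "on_track": projected >= target_customers,
    }

# Three months in: £63k spent, 450 customers acquired
print(pacing_report(spend_to_date=63000, customers_to_date=450, months_elapsed=3))
# → CPA is healthy (£140), but the run rate projects to 1,800 customers, short of 2,000
```

The useful part is not the arithmetic but the separation: a team can be under its CPA threshold and still miss the objective, which is exactly the kind of trade-off a vague objective hides.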

Then work backwards. What leading indicators would move if this objective were on track? If you are trying to grow a new segment, you would expect to see rising brand search in that segment, increasing share of voice in relevant channels, and growing pipeline from that audience before you see revenue. Map those indicators and start tracking them from day one.

Set a baseline before you start. This is the step most teams skip because they are in a hurry to launch. But without a baseline, you cannot measure change. You can only measure current state, which tells you nothing about whether your strategy is working. When I was turning around a loss-making agency, one of the first things I did was establish where we actually were across every commercial metric before we changed anything. It was uncomfortable. The numbers were bad. But it gave us something honest to measure against.

Be honest about timeframes. Brand-building activity takes 6 to 18 months to show up in commercial metrics. If you are measuring brand investment on a 90-day cycle and declaring it ineffective because revenue has not moved, you are measuring the wrong thing at the wrong time. Different types of marketing activity have different lag times, and your measurement framework needs to reflect that.

Understanding how market penetration strategy works is useful context here, because the metrics that matter for penetration are fundamentally different from the metrics that matter for retention or loyalty. Your measurement framework should be calibrated to the type of growth you are pursuing.

The Incrementality Question Nobody Wants to Ask

The most uncomfortable question in marketing measurement is: what would have happened if we had not run this campaign? Not “did we get results?” but “did we get results we would not have got anyway?”

This is the incrementality question, and most marketing teams avoid it because the honest answer is often deflating. I judged the Effie Awards, which are specifically designed to reward marketing effectiveness, and even there the standard of causal evidence for marketing impact was variable. Many entries could demonstrate correlation between campaign activity and business results. Far fewer could demonstrate that the campaign caused the result rather than coincided with it.

You do not need a controlled experiment to get a useful read on incrementality. There are practical approaches. Geographic split tests, where you run activity in some markets and not others, are one of the most accessible. Holdout groups in CRM campaigns are another. Even simple time-series analysis, looking at what happened to a metric before, during, and after a campaign, can give you a directional read on whether the activity made a difference.
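The holdout-group read in particular reduces to a few lines of arithmetic. This is a minimal sketch with made-up numbers: conversions in the holdout group estimate the baseline (what would have happened anyway), and anything the treated group delivers above that baseline is the incremental effect.

```python
# Sketch of a holdout-group incrementality read. Numbers are invented.
# "Treated" received the campaign; "holdout" did not.

def incremental_lift(treated_size, treated_conversions,
                     holdout_size, holdout_conversions):
    treated_rate = treated_conversions / treated_size
    holdout_rate = holdout_conversions / holdout_size   # baseline: happens anyway
    # Conversions above what the baseline rate predicts for the treated group
    incremental = treated_conversions - holdout_rate * treated_size
    lift = (treated_rate - holdout_rate) / holdout_rate
    return round(incremental), round(lift, 3)

# e.g. a CRM campaign sent to 50,000 customers with a 5,000-customer holdout
print(incremental_lift(50000, 1200, 5000, 100))
# → (200, 0.2): of 1,200 conversions, roughly 200 were incremental (a 20% lift)
```

Note what the example makes explicit: 1,000 of the 1,200 conversions would have happened without the campaign, which is precisely the portion a last-touch attribution report would still claim credit for.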

The point is not to achieve academic rigour. The point is to be honest about what you know and what you are assuming. A marketing team that says “we believe this campaign drove approximately X in incremental revenue, based on this evidence, with these caveats” is far more commercially credible than one that presents an attribution report as if it were proof of causation.

Where Marketing Mix Modelling Fits In

Marketing mix modelling (MMM) has had a resurgence as third-party cookies have declined and last-click attribution has become less reliable. For businesses with sufficient data and budget, it offers a way to estimate the contribution of different marketing activities to business outcomes without relying on individual-level tracking.

MMM is not perfect. It requires historical data, it takes time to build and calibrate, and it is a statistical model, not a ground truth. But it has one significant advantage over most attribution approaches: it can account for offline activity, brand investment, and external factors like seasonality and competitive activity, which last-click models simply cannot see.

For smaller businesses without the data volume to run MMM, the principles still apply. You can do a simplified version manually. Look at your spend by channel over 12 to 24 months alongside your revenue and acquisition data. Look for patterns. Look for periods where spend increased but results did not follow, or where results improved without a corresponding increase in spend. These anomalies are often more instructive than the periods where everything moved in the expected direction.

BCG’s research on scaling agile approaches is relevant here because the same principle applies to measurement: you need feedback loops that are fast enough to be useful but grounded enough to be accurate. The cadence of your measurement matters as much as the metrics themselves.

Common Measurement Mistakes and How to Avoid Them

After two decades of reviewing marketing performance across more than 30 industries, the same mistakes appear with remarkable consistency.

The first is measuring outputs instead of outcomes. Impressions, clicks, and engagement are outputs. Revenue, customer acquisition, and market share are outcomes. A campaign can generate millions of impressions and deliver zero commercial value. If your reporting stops at outputs, you will never know.

The second is letting the most measurable channel take the most credit. Digital channels produce data. TV, out of home, and sponsorship produce less of it. So digital gets credited, and everything else gets questioned. This creates a systematic bias in investment decisions that tends to favour short-term, lower-funnel activity over the brand-building work that creates long-term growth.

The third is changing the metrics when the results are uncomfortable. I have seen this happen in agencies and in client-side teams. A campaign underperforms on the original KPI, so the reporting shifts to a metric where it looks better. This is not measurement, it is narrative management. It destroys the credibility of the marketing function and, more importantly, it prevents the organisation from learning anything useful.

The fourth is measuring marketing in isolation from the rest of the business. Marketing does not operate in a vacuum. Sales team performance, product quality, pricing, customer service, competitive activity: all of these influence the outcomes that marketing is trying to drive. A measurement framework that attributes all movement in commercial metrics to marketing activity is as misleading as one that attributes nothing to it.

The fifth is over-engineering the measurement framework before you have the basics right. I have worked with businesses that spent months building elaborate attribution models before they had clean data on basic things like customer acquisition cost or revenue by channel. Get the fundamentals right first. Honest, simple measurement beats sophisticated measurement built on dirty data.

Connecting Measurement to Strategic Decisions

Measurement only has value if it changes decisions. This sounds obvious, but a remarkable amount of marketing measurement is produced, reviewed, filed, and forgotten without influencing anything. If your monthly marketing report does not result in at least one concrete decision, the report is theatre.

The most useful measurement frameworks are built backwards from decisions. Start by asking: what are the decisions we need to make in the next quarter? Where to invest, which channels to scale, which audiences to prioritise, whether to continue a specific programme. Then ask: what data would help us make those decisions better? Build your measurement around that.

This is different from building a comprehensive dashboard that captures everything. Comprehensive dashboards are useful for operational monitoring. They are less useful for strategic decision-making because they present everything with equal weight. Strategic measurement requires hierarchy: the three to five metrics that tell you whether you are on track, with supporting data underneath.

When I grew an agency from a team of 20 to over 100 people and moved it from loss-making to a top-five position in its market, one of the disciplines that made the difference was ruthless clarity about which metrics actually mattered. We tracked a lot of things, but we made decisions based on a small number of commercial indicators that we trusted. Everything else was context.

For teams working through broader growth strategy questions, the Go-To-Market and Growth Strategy hub covers how measurement connects to planning, channel strategy, and commercial prioritisation across the full growth cycle.

There is also a useful parallel in how creator partnerships are being evaluated. Go-to-market campaigns with creators face the same measurement challenge as any brand activity: the outputs are visible, but the business outcomes require a longer view and a more honest attribution approach.

Honest Approximation Over False Precision

The marketing industry has a complicated relationship with numbers. On one hand, there is pressure to prove ROI with precision. On the other, the tools available to measure marketing impact are imperfect, the causal chains are long and complex, and the honest answer to “what did this campaign deliver?” is often “approximately this, with significant uncertainty.”

The temptation is to resolve this tension with false precision. Pick an attribution model that produces a clean number. Report that number with confidence. Move on. The problem is that false precision is worse than honest uncertainty, because it leads to decisions based on numbers that look reliable but are not.

The better approach is honest approximation. Be clear about what you can measure directly, what you are inferring, and what you are assuming. Present ranges rather than point estimates where the uncertainty is real. Acknowledge the limitations of your data. This is not a sign of weakness. It is a sign of commercial credibility, and it builds more trust with senior stakeholders than a dashboard full of confident numbers that do not hold up under scrutiny.

The CFOs and CEOs I have worked with over the years are not expecting marketing to produce perfect measurement. They are expecting marketing to be honest about what it knows, clear about what it is trying to achieve, and rigorous about the evidence it uses to make investment decisions. That is a standard most marketing teams can meet, if they are willing to prioritise honesty over optics.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What metrics should I use to measure marketing strategy?
Measure marketing strategy across three layers: activity metrics (clicks, impressions, cost per click), leading indicators (brand search volume, share of voice, new audience reach), and lagged business outcomes (revenue, customer acquisition cost, market share). Most teams only track the first layer. The third layer is what business leadership actually cares about, so your framework needs to connect activity to commercial outcomes, not just campaign performance.
How do you measure marketing strategy without perfect data?
You do not need perfect data to measure marketing strategy effectively. Honest approximation is more useful than false precision. Establish a baseline before you start, track a small number of metrics that connect to your business objective, and be transparent about what you are measuring directly versus inferring. Geographic split tests and holdout groups can give you directional evidence of incrementality without requiring perfect attribution.
Why does performance marketing overstate its contribution to growth?
Performance marketing is built around last-click or last-touch attribution, which credits the final interaction before a conversion. But many of those conversions would have happened anyway, because the customer was already in market and already close to buying. Performance marketing captures existing demand efficiently, but it does not always create new demand. If you measure only performance channels, you will systematically over-credit them and under-invest in the brand-building activity that creates future demand.
What is incrementality in marketing measurement?
Incrementality measures the additional business outcome that occurred because of a specific marketing activity, above what would have happened without it. It is the answer to the question: what would have happened if we had not run this campaign? Measuring incrementality requires a comparison, either a holdout group, a geographic split test, or a time-series analysis. It is the most honest way to evaluate whether marketing activity is creating value or simply coinciding with outcomes that were already likely.
How often should you review marketing strategy measurement?
Review activity metrics weekly or bi-weekly for operational optimisation. Review leading indicators monthly to track whether your strategy is building momentum. Review lagged business outcomes quarterly, with the understanding that some marketing activity takes 6 to 18 months to show up in commercial metrics. The cadence matters: reviewing brand investment on a 90-day cycle and declaring it ineffective is a category error. Match your review frequency to the type of marketing and the expected lag time of its impact.