Marketing Metrics That Move the Business

The most important marketing metrics are the ones connected to a business outcome you can defend in a boardroom. Not impressions. Not engagement rate. Not sessions. Revenue contribution, pipeline quality, customer acquisition cost relative to lifetime value, and brand health indicators that lead commercial performance. Everything else is context, not conclusion.

The problem is not that marketers track the wrong things. It is that they track too many things without a clear hierarchy, and the metrics that get reported are often the ones that look good rather than the ones that tell the truth.

Key Takeaways

  • Metrics only matter in relation to a business objective. A metric without context is just a number with a label on it.
  • Vanity metrics persist because they are easy to produce and rarely challenged. Critical thinking is the antidote, not better dashboards.
  • The most commercially useful metrics sit at the intersection of marketing activity and revenue impact: CAC, LTV, contribution margin, and pipeline velocity.
  • Brand health metrics and demand generation metrics need to be read together. Neither tells the full story alone.
  • Most marketing teams are data-rich and insight-poor. The discipline is in knowing which numbers to act on, not which ones to collect.

Why Most Metric Conversations Start in the Wrong Place

When I joined an agency that was losing money and losing clients, one of the first things I did was sit in on the reporting calls. What I found was a ritual. The team would walk through a deck of metrics, the client would nod or ask a few surface-level questions, and everyone would leave feeling like something productive had happened. But nothing had. The numbers being reported were not connected to anything the client actually cared about commercially. They were connected to what the agency could measure easily and present confidently.

That is not a small agency problem. I have seen the same pattern in large enterprise marketing teams, in-house functions with six-figure analytics budgets, and at the Effie Awards where I have judged campaigns that won on effectiveness grounds but could not clearly articulate what moved the needle and why. The metric conversation almost always starts with “what can we track” rather than “what decision does this number need to support.”

If you want to build a metric framework that holds up under commercial scrutiny, you have to start with the business objective and work backwards. Not the other way around.

The Hierarchy That Most Marketing Teams Skip

There is a useful way to think about marketing metrics in three tiers: business metrics, marketing performance metrics, and channel or activity metrics. Most teams report heavily on the third tier and lightly on the first. That is backwards.

Business metrics are the ones the CFO cares about. Revenue, gross margin, customer acquisition cost, customer lifetime value, market share, and retention rate. These are not marketing metrics in the narrow sense, but they are the metrics that marketing has to connect to if it wants to be taken seriously as a commercial function.

Marketing performance metrics sit one level down. These are the leading indicators that predict business outcomes: pipeline volume and velocity, lead quality scores, conversion rates through the funnel, cost per acquisition by channel, and brand consideration among target audiences. These are the metrics a CMO should be living in.

Channel and activity metrics are the operational layer. Click-through rates, impressions, open rates, video views, social reach. These matter for optimisation decisions within a channel, but they should almost never appear in a board-level report without being connected upward to a marketing performance or business metric. When they do appear in isolation, it is usually because someone is filling space.

If you are building or rebuilding a measurement framework, the marketing analytics hub on The Marketing Juice covers the structural decisions behind that process in more depth, including how to approach attribution, tool selection, and data governance without overcomplicating things.

Customer Acquisition Cost: The Metric With the Most Abuse

Customer acquisition cost, or CAC, is one of the most cited metrics in marketing and one of the most frequently miscalculated. The number only means something if you are consistent about what goes into it. Does your CAC include agency fees? Creative production? The portion of the marketing team’s salary allocated to acquisition activity? In most cases, it does not. Which means the CAC being reported is a partial cost figure dressed up as a full one.

I spent a period working with a direct-to-consumer business that was proud of its CAC. The number looked healthy. But when we pulled apart what was actually included, the calculation excluded the cost of the creative team, the tech stack, and a significant portion of the brand spend that was clearly driving consideration. The real CAC was nearly double what was being reported. The business was not as efficient as it believed. It was just measuring selectively.

CAC only becomes useful when it is read alongside customer lifetime value. A high CAC is not a problem if the LTV justifies it. A low CAC is not a success if the customers you are acquiring churn quickly or have low average order values. The ratio between the two is what tells you whether your acquisition engine is actually working.
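To make the difference between a partial and a fully loaded CAC concrete, here is a minimal sketch in Python. Every figure is hypothetical, chosen only to illustrate the cost categories discussed above (agency fees, creative production, tooling, and allocated salaries); the LTV figure is likewise assumed.

```python
# Hypothetical figures for illustration only.
media_spend = 120_000          # paid media
agency_fees = 25_000           # agency and contractor fees
creative_production = 18_000   # creative team time and production
tools = 7_000                  # acquisition share of the tech stack
salaries = 30_000              # acquisition share of team salaries
new_customers = 400

# Partial CAC: media spend only (the flattering version).
partial_cac = media_spend / new_customers

# Fully loaded CAC: everything that supported acquisition.
full_cac = (media_spend + agency_fees + creative_production
            + tools + salaries) / new_customers

ltv = 1_500  # assumed average customer lifetime value
print(f"Partial CAC: {partial_cac:,.0f}")    # 300
print(f"Full CAC:    {full_cac:,.0f}")       # 500
print(f"LTV:CAC:     {ltv / full_cac:.1f}")  # 3.0
```

In this sketch the fully loaded CAC is 500 against a reported 300, the same kind of gap as in the direct-to-consumer example above, and the number that actually matters is the resulting LTV:CAC ratio, not either CAC figure on its own.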

Forrester has written clearly about the snake oil problem in marketing measurement, and CAC is one of the areas where the gap between reported figures and commercial reality tends to be widest. It is worth reading if you want a sharp external perspective on where measurement goes wrong at the structural level.

Conversion Rate: Useful Signal, Terrible Headline Metric

Conversion rate is one of those metrics that can look like a win and mask a loss simultaneously. If you double your conversion rate but the traffic you are converting is lower quality, your revenue outcome may be flat or declining. If you improve conversion rate on a channel that represents 3% of your total volume, the commercial impact is negligible regardless of how impressive the percentage change looks.

The version of conversion rate that matters is conversion rate by traffic source, by audience segment, and by funnel stage, read in the context of volume and revenue per conversion. Not a single blended number across everything. When you blend conversion data across sources, you lose the signal. You end up optimising for an average that does not represent any real customer cohort.
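A tiny worked example, with entirely made-up traffic figures, shows how a blended rate hides the signal:

```python
# Hypothetical traffic sources: (sessions, conversions, revenue per conversion)
sources = {
    "organic_search": (10_000, 400, 180),
    "paid_social":    (40_000, 600, 60),
}

# The blended figure averages across cohorts that behave nothing alike.
total_sessions = sum(s for s, _, _ in sources.values())
total_conversions = sum(c for _, c, _ in sources.values())
print(f"Blended conversion rate: {total_conversions / total_sessions:.1%}")

# Segmented view: rate and revenue per source.
for name, (sessions, conversions, rev_per_conv) in sources.items():
    print(f"{name}: {conversions / sessions:.1%} conversion, "
          f"{conversions * rev_per_conv:,} revenue")
```

The blended rate here is 2.0%, but organic converts at 4.0% and produces two thirds of the revenue from a fifth of the traffic. Optimising toward the blended average would push effort at the wrong cohort, which is exactly the failure mode described above.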

Semrush has a useful breakdown of content marketing metrics that touches on this segmentation point, particularly around how conversion data from content channels needs to be interpreted differently from paid channel conversion data. The mechanics are different. The buyer intent is different. Treating them the same produces bad decisions.

Pipeline Velocity: The Metric B2B Teams Underuse

For B2B marketing teams, pipeline velocity is one of the most commercially grounded metrics available, and it is consistently underreported. Pipeline velocity measures how quickly revenue moves through your sales pipeline. It combines deal volume, average deal size, win rate, and sales cycle length into a single figure that tells you how much revenue your pipeline is generating per unit of time.
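The formula itself is simple enough to write down. The figures below are hypothetical, but the calculation is the standard one: revenue per unit of time from the four inputs just listed.

```python
# Pipeline velocity = (opportunities x win rate x avg deal size) / cycle length
opportunities = 80       # qualified opportunities in the pipeline (assumed)
win_rate = 0.25          # fraction of opportunities that close
avg_deal_size = 20_000   # average revenue per closed deal
cycle_days = 90          # average sales cycle length in days

velocity = opportunities * win_rate * avg_deal_size / cycle_days
print(f"Pipeline velocity: {velocity:,.0f} per day")

# Better lead quality shows up directly: lifting the win rate to 30%
# raises velocity with no change in lead volume at all.
improved = opportunities * 0.30 * avg_deal_size / cycle_days
print(f"With better leads:  {improved:,.0f} per day")
```

The second calculation is the point: a marketing team that improves lead quality moves the number even if it generates exactly the same volume, which is why velocity is a fairer shared metric for marketing and sales than raw lead counts.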

The reason this matters for marketing is that it connects marketing’s contribution directly to commercial outcomes. If marketing is generating high volumes of low-quality leads that stall in the pipeline, pipeline velocity will show that. If marketing is improving lead quality through better targeting or content, pipeline velocity will reflect it. It is a metric that forces marketing and sales to look at the same number and have the same conversation, which is where a lot of the dysfunction between those two functions tends to live.

When I was growing an agency from a team of 20 to over 100 people, one of the structural changes that made the most difference was getting the business development team and the marketing function aligned on pipeline metrics rather than separate vanity metrics. Marketing was chasing leads. BD was chasing revenue. Neither was looking at velocity. When we fixed that, the conversation quality changed entirely.

Brand Health Metrics: The Long Game Most Performance Teams Ignore

There is a version of performance marketing that is entirely focused on the bottom of the funnel and treats brand as someone else’s problem. I have run agencies that operated this way, and I have seen the ceiling it creates. When you optimise purely for short-term conversion, you tend to exhaust your existing demand pool rather than expand it. Growth slows. CAC rises. And you cannot easily explain why because the metrics you have been tracking do not capture what is happening at the top of the funnel.

Brand health metrics (aided and unaided awareness, consideration, preference, and net promoter score among target segments) are leading indicators of future commercial performance. They do not show up in your GA4 dashboard. They require primary research or brand tracking tools. But they tell you things that no click-through rate or cost-per-acquisition figure can.

The challenge is connecting them to commercial outcomes in a way that satisfies a commercially minded leadership team. The most defensible approach is to track brand metrics alongside revenue and pipeline metrics over time and build a longitudinal view of how shifts in brand health precede shifts in commercial performance. It takes patience. But it is the kind of evidence that changes how leadership thinks about brand investment.

Forrester’s perspective on marketing reporting as a forward-looking discipline is relevant here. The argument for brand metrics is not that they are soft. It is that they are predictive in ways that trailing conversion metrics are not.

Return on Ad Spend: The Metric That Flatters Paid Channels

Return on ad spend, or ROAS, is the metric that paid media teams live by, and it has a structural problem. It measures the revenue attributed to ad spend, but attribution is not the same as causation. A customer who was already going to buy from you, who clicked a retargeting ad on the way to your site, will show up in your ROAS calculation as a conversion driven by paid media. They were not. They were a conversion that paid media claimed credit for.

This is not a new observation. But it is one that is consistently underweighted in how ROAS gets reported and acted on. The result is that businesses over-invest in retargeting and branded search, which have high ROAS figures because they are capturing existing intent, and under-invest in prospecting and brand channels, which create future demand but show up poorly in last-click or even multi-touch attribution models.

ROAS is a useful efficiency metric for optimising within a paid channel. It is not a reliable guide to where your marketing budget should go across channels. For that, you need incrementality testing or marketing mix modelling, both of which try to answer the harder question: what would have happened without this spend?
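A rough sketch of what incrementality testing asks that ROAS does not, using a simple holdout comparison. All figures are hypothetical and the design is deliberately simplified; real incrementality tests involve matched geographies or audiences and proper statistical treatment.

```python
spend = 50_000
attributed_revenue = 250_000   # what the ad platform claims the spend drove

# Holdout test: two matched groups, only one exposed to the ads.
exposed_users = 100_000
exposed_rev_per_user = 12.0
holdout_rev_per_user = 10.0    # baseline: revenue that happens anyway

# Incremental revenue is only the lift above the holdout baseline.
incremental_revenue = (exposed_rev_per_user - holdout_rev_per_user) * exposed_users

print(f"Attributed ROAS:  {attributed_revenue / spend:.1f}x")   # 5.0x
print(f"Incremental ROAS: {incremental_revenue / spend:.1f}x")  # 4.0x
```

The gap between the two figures is the demand the channel claimed but did not create. For retargeting and branded search, where the baseline purchase rate is high, that gap tends to be much wider than in this example.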

Unbounce has a thorough breakdown of content marketing metrics that illustrates how different channel types require different measurement logic. The same principle applies to paid media. ROAS is not wrong. It is just incomplete when used as a cross-channel decision tool.

Engagement Metrics: When They Matter and When They Are Noise

Engagement metrics (time on page, scroll depth, social shares, email open rates, video completion rates) have a place in a measurement framework. That place is as diagnostic tools for content and creative optimisation, not as headline performance indicators.

A high email open rate tells you your subject line worked. It does not tell you whether the email drove any commercial outcome. A high time-on-page figure tells you the content held attention. It does not tell you whether that attention translated into consideration, intent, or revenue. Engagement metrics need to be connected forward to a conversion or commercial outcome to be useful. On their own, they are just evidence that something happened, not that something mattered.

Buffer’s resource on content marketing metrics makes a useful distinction between consumption metrics, engagement metrics, and conversion metrics. That three-part framing is a clean way to think about where engagement data sits in the hierarchy. It is consumption and engagement that you optimise creative with. It is conversion that you report on commercially.

The discipline I try to apply, and that I have pushed teams to apply across 20 years of agency work, is to ask one question before adding any metric to a report: what decision does this number support? If the answer is unclear, the metric does not belong in the report. It might belong in an operational dashboard. It might belong in a creative review. But it should not be taking up space in a document that is meant to drive commercial decisions.

The Analytics Infrastructure Behind the Metrics

None of the metrics above mean anything if the data feeding them is unreliable. And in my experience, data reliability is a much bigger problem than most marketing teams acknowledge. Tracking breaks. UTM parameters get applied inconsistently. GA4 configurations get set up without a measurement plan and then never audited. Consent mode changes the data. And nobody notices until someone asks a question that the data cannot answer cleanly.

Moz has a useful piece on what to pay attention to in GA4 that covers some of the configuration decisions that affect data quality downstream. The GA4 transition has been a forcing function for better measurement hygiene in some teams and a source of significant data degradation in others, depending on how seriously the setup was taken.

The MarketingProfs piece on web analytics preparation is older but the core argument has not aged. If you do not define what you are measuring and why before you start collecting data, you will end up with a lot of data and very few answers. That is the situation most marketing teams are in. They have dashboards full of numbers and a persistent inability to answer the question their CFO is actually asking.

If you are working through the infrastructure decisions behind a measurement framework, the marketing analytics section of The Marketing Juice covers attribution models, GA4 setup, and the practical decisions behind building a measurement stack that holds up under commercial pressure.

Building a Metric Framework That Survives a CFO Question

The test I use for any marketing metric framework is simple. Could you walk into a room with your CFO, present these numbers, and answer every follow-up question they ask without reaching for a caveat? If the answer is no, the framework has a gap somewhere.

CFOs ask questions like: what did we get for that spend? How does this compare to what we would have got if we had invested elsewhere? What is the trend over 12 months? What is the confidence level in these figures? Most marketing reporting cannot answer those questions cleanly. Not because the data does not exist, but because the framework was not built with those questions in mind.

The MarketingProfs guide on building a marketing dashboard offers a structured approach to this problem that is still sound in its logic. The technology has changed. The principle has not. Start with the audience for the report, identify the decisions they need to make, and select metrics that support those decisions. Everything else is optional.

The junior marketers I have worked with who became genuinely strong commercial operators all had one thing in common. They learned early to question the metric before they reported it. Not to be difficult. But because they understood that a number without context is not insight. It is just noise with formatting. That habit of critical thinking, applied consistently to measurement, is what separates a marketing team that drives decisions from one that produces reports.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What are the most important marketing metrics for a B2B business?
For B2B, the metrics that carry the most commercial weight are pipeline velocity, cost per qualified lead, lead-to-opportunity conversion rate, average deal size, and customer lifetime value. These connect marketing activity directly to revenue outcomes and give both marketing and sales a shared language for evaluating performance. Vanity metrics like impressions and social reach have almost no relevance in a B2B context unless they can be connected to pipeline contribution.
How do you decide which marketing metrics to include in a board report?
Start with the decisions the board needs to make and work backwards. Board-level reports should focus on business metrics with marketing context: revenue contribution, customer acquisition cost, retention rates, and market share trends. Channel-level metrics like click-through rates and engagement rates belong in operational dashboards, not board packs. If a metric does not support a commercial decision, it should not be in the report.
Is ROAS a reliable metric for evaluating marketing performance?
ROAS is a useful efficiency metric within a paid channel but an unreliable guide to overall marketing performance. Its core limitation is that it measures attributed revenue, not incremental revenue. Channels with high ROAS figures, particularly retargeting and branded search, often capture existing demand rather than creating new demand. For cross-channel budget decisions, incrementality testing or marketing mix modelling gives a more accurate picture of what each channel is actually contributing.
What is the difference between a vanity metric and a performance metric?
A vanity metric looks good in a report but does not connect to a business outcome. Impressions, follower counts, and page views are common examples. A performance metric is one that either directly measures a commercial outcome or reliably predicts one. The distinction is not always about the metric itself but about how it is used. Conversion rate is a performance metric when read in the context of volume and revenue per conversion. It becomes a vanity metric when reported as a standalone figure without that context.
How often should marketing metrics be reviewed and reported?
The review cadence should match the decision cycle, not a calendar habit. Operational channel metrics, such as paid media performance and email metrics, benefit from weekly review because they inform active optimisation decisions. Marketing performance metrics like pipeline volume and CAC suit a monthly review cycle. Business-level metrics including revenue contribution and LTV are typically reviewed quarterly in the context of broader commercial performance. Reporting everything at the same frequency produces noise, not insight.
