Marketing Metrics Are Lying to You. Here’s How to Read Them Properly

Marketing metrics and KPIs are only as useful as the questions you ask of them. A metric in isolation tells you almost nothing. A metric in context, compared against a baseline, measured over time, and connected to a business outcome, tells you something worth acting on. The difference between those two things is where most marketing measurement goes wrong.

Most teams are not short of data. They are short of the discipline to interpret it honestly, and the commercial grounding to know which numbers actually matter.

Key Takeaways

  • A metric without context is just a number. Baselines, trends, and business outcomes are what give metrics meaning.
  • Vanity metrics survive in dashboards because they are easy to improve and uncomfortable to remove. That does not make them useful.
  • The KPIs that get reported most often are rarely the ones most connected to revenue. Closing that gap is a strategic choice, not a technical one.
  • Most marketing teams measure what their tools surface by default, not what the business actually needs to know.
  • Good measurement is honest approximation. The goal is not perfect data. It is data you can make decisions from.

Why Most Marketing Dashboards Are Built Backwards

When I ran agencies, I noticed a pattern that repeated itself across clients in almost every sector. The dashboards were impressive. Dozens of metrics, colour-coded RAG statuses, weekly slide decks with charts trending upward. And yet, when you sat in the room with the finance director or the CEO and asked what the marketing function was actually contributing to the business, the answer was usually vague.

The problem was not the data. The problem was that the dashboards had been built around what the tools made easy to measure, not around what the business needed to understand. Google Analytics surfaces sessions and bounce rate by default. So those went on the dashboard. The paid media platform reported impressions and click-through rate. Those went on too. Nobody stopped to ask whether any of it connected to revenue.

This is how vanity metrics take root. Not through deliberate dishonesty, but through the path of least resistance. It is much easier to report on what the tool already shows you than to build measurement from a business question outward.

If you are building or rebuilding your approach to marketing analytics, the Marketing Analytics and GA4 hub on this site covers the broader measurement landscape, from attribution to GA4 configuration to the metrics frameworks that actually hold up under commercial scrutiny.

What Makes a KPI Worth Keeping

A KPI earns its place on a dashboard when it meets three conditions. First, it must be connected to a business outcome, not just a marketing activity. Second, it must be actionable: if it moves in the wrong direction, you need to know what to do about it. Third, it must be honest, meaning it should be hard to game and difficult to misread.

Most metrics fail at least one of these tests. Click-through rate is easy to improve by writing clickbait headlines. Impressions go up when you increase spend, regardless of whether anyone cares about what you are saying. Email open rates became unreliable once Apple introduced Mail Privacy Protection, and yet they still appear on marketing reports as though nothing changed.

The metrics that tend to hold up are the ones that require something real to happen. Conversion rate requires someone to take an action. Revenue per visitor requires both traffic and commercial intent. Customer acquisition cost requires you to account for what you actually spent. These are harder to inflate and harder to misread, which is exactly why they belong on your dashboard and many of the easier metrics do not.
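To make the contrast concrete, here are those three "hard to inflate" metrics as simple formulas. The input figures are invented for illustration, not benchmarks:

```python
# Hypothetical inputs for one reporting period. The point is the
# formulas, not the numbers.
visitors = 40_000
conversions = 800
revenue = 96_000.0
marketing_spend = 24_000.0
new_customers = 600

conversion_rate = conversions / visitors        # requires a real action: 2.0%
revenue_per_visitor = revenue / visitors        # requires commercial intent: £2.40
cac = marketing_spend / new_customers           # requires accounting for real spend: £40

print(f"{conversion_rate:.1%}, £{revenue_per_visitor:.2f}, £{cac:.0f}")
```

Each denominator and numerator is something the business can verify independently, which is what makes these figures hard to game.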

Forrester has written clearly about the snake oil problem in marketing measurement, the tendency for vendors and agencies to dress up activity metrics as proof of business impact. It is worth reading if you want a sharp external perspective on why so much measurement theatre persists in the industry.

The Difference Between Metrics and KPIs

These two terms get used interchangeably, and that matters more than it sounds. A metric is any measurable data point: sessions, clicks, impressions, open rate, time on page. A KPI, a Key Performance Indicator, is a metric that has been elevated to strategic significance because it directly reflects progress toward a business goal.

The distinction matters because it changes how you treat the number. A metric you monitor. A KPI you are accountable for. When everything on a dashboard is treated as a KPI, nothing is. You end up in reporting meetings where every number gets discussed with equal weight, and the conversation never reaches the things that actually determine whether the business is growing.

In practice, most marketing functions should have no more than five or six true KPIs. Everything else is either a diagnostic metric, something you check when a KPI moves in the wrong direction, or a reporting metric, something that gets logged but does not drive decisions. The discipline of separating these three categories is one of the most commercially useful things a marketing leader can do, and almost nobody does it consistently.

Early in my career, long before I was running agencies, I asked the managing director of the company I worked for to approve budget for a new website. The answer was no. Rather than accepting that the project was dead, I taught myself to code and built it. The lesson I took from that was not about resourcefulness, although that helped. It was about knowing which outcome mattered, a better website that could generate enquiries, and finding a way to measure whether you had achieved it. The metric was not “website built.” It was enquiries from the website. Everything else was noise.

How to Choose the Right KPIs for Your Business

The right KPIs are not universal. They depend on your business model, your stage of growth, your sales cycle, and what decisions you actually need to make. A SaaS business with a freemium model needs to track trial-to-paid conversion rate and monthly recurring revenue contribution from marketing. An e-commerce retailer needs to watch return on ad spend, average order value, and repeat purchase rate. A professional services firm needs to understand cost per qualified lead and pipeline contribution.

The starting point is always the business goal, not the channel. If the business goal is to grow revenue by 20% this year, work backwards. What volume of customers does that require? What conversion rate does that assume? What does that mean for the number of qualified leads marketing needs to generate? What does that imply for traffic, reach, or audience growth? Only once you have that chain mapped out can you identify which metrics genuinely matter and which ones are just filling space on a slide.
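That backwards chain is simple enough to sketch as arithmetic. The conversion rates and deal values below are illustrative assumptions; substitute your own:

```python
# Back-of-envelope: work backwards from a revenue goal to the lead
# volume marketing needs to generate. All figures are hypothetical.
revenue_goal = 1_000_000      # net new revenue target for the year
avg_deal_value = 10_000       # average contract value
win_rate = 0.25               # opportunities won / opportunities
lead_to_opp_rate = 0.40       # qualified leads that become opportunities

customers_needed = revenue_goal / avg_deal_value                   # 100
opportunities_needed = customers_needed / win_rate                 # 400
qualified_leads_needed = opportunities_needed / lead_to_opp_rate   # 1,000

print(f"qualified leads required: {qualified_leads_needed:.0f}")
```

Once the chain is explicit, every metric on the dashboard either sits on it somewhere or it does not, and that is the test for whether it belongs.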

Semrush has a useful breakdown of how data-driven marketing connects channel metrics to business outcomes, which is worth reading if you are in the early stages of building this framework. The principle is straightforward: every metric on your dashboard should have a line of sight to a business number.

When I was growing one of the agencies I ran from a team of around 20 to over 100 people, the temptation was to track everything. New business enquiries, proposal win rate, average contract value, client retention, revenue per head, utilisation. All of it was useful in different contexts. But the KPIs that the leadership team was accountable for were a much shorter list: net new revenue, client retention rate, and gross margin. Those three numbers told us whether the business was healthy. Everything else was diagnostic.

The Metrics That Look Good and Mean Nothing

Vanity metrics are not always obviously vanity. Some of them look credible until you examine what they actually tell you.

Social media follower count is the obvious one. A large following that does not engage, does not convert, and does not correlate with revenue growth is a number for the slide deck, not the boardroom. But there are subtler examples. Website traffic is one. Traffic that grows because you are attracting the wrong audience, people who bounce immediately and never return, is worse than flat traffic from a smaller, more relevant audience. Cost per click looks efficient until you realise your landing page converts at 0.4% and your competitor’s converts at 4%.

The reason vanity metrics persist is partly psychological and partly political. Psychologically, it feels good to report numbers that are going up. Politically, it is easier to defend activity than outcomes. If your KPI is impressions, you can always point to the media plan. If your KPI is revenue contribution, you are exposed in a way that requires real accountability.

I have sat in Effie Award judging sessions where entries were submitted with impressive-sounding metrics that, when you read carefully, had no connection to the business problem the campaign was supposed to solve. Brand awareness up 12%. Positive sentiment increased. Share of voice improved. And yet the brand’s market share was flat or declining. The metrics were real. The claim of effectiveness was not.

HubSpot’s writing on why marketing analytics differs from web analytics gets at this distinction clearly. Web analytics tells you what happened on your site. Marketing analytics tells you whether your marketing is working. Those are different questions, and conflating them is how dashboards fill up with data that does not drive decisions.

Building a Measurement Framework That Holds Up

A measurement framework is not a dashboard. A dashboard is the output. The framework is the logic that determines what goes on it, why, and how it connects to the business.

A working framework has four layers. The first is business KPIs: the numbers the board cares about (revenue, margin, customer growth, retention). The second is marketing KPIs: the metrics marketing is directly accountable for (pipeline generated, cost per acquisition, return on marketing investment). The third is channel metrics: the performance indicators for each individual channel (conversion rate, cost per click, email click-to-open rate). The fourth is diagnostic metrics: the numbers you check when something breaks (landing page load time, ad frequency, unsubscribe rate).

The discipline is in keeping these layers distinct. Channel metrics are not KPIs. Diagnostic metrics are not KPIs. When everything gets promoted to the top layer, the framework collapses into a list of numbers with no hierarchy and no accountability.
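One way to enforce that discipline is to classify every metric before it reaches a dashboard. A minimal sketch, with illustrative metric names and groupings:

```python
# Hypothetical four-layer classification. The metric names mirror the
# examples in the text; your own framework will differ.
framework = {
    "business_kpi":  ["revenue", "gross_margin", "customer_growth", "retention"],
    "marketing_kpi": ["pipeline_generated", "cost_per_acquisition", "romi"],
    "channel":       ["conversion_rate", "cost_per_click", "email_ctor"],
    "diagnostic":    ["page_load_time", "ad_frequency", "unsubscribe_rate"],
}

def layer_of(metric: str) -> str:
    """Return the layer a metric belongs to, or flag it as unclassified."""
    for layer, metrics in framework.items():
        if metric in metrics:
            return layer
    # If it fits no layer, it probably does not belong on a report at all.
    return "unclassified"

print(layer_of("cost_per_click"))   # channel, not a KPI
print(layer_of("follower_count"))   # unclassified
```

The useful side effect is the "unclassified" bucket: anything that lands there has to argue its way onto a report rather than drifting on by default.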

Forrester’s guidance on automating marketing dashboards is worth reading here, particularly the emphasis on designing dashboards around decisions rather than data availability. The question to ask before adding any metric to a report is: what decision does this enable? If the answer is unclear, the metric probably does not belong there.

For content-led businesses, Semrush has a solid breakdown of content marketing metrics that separates reach and engagement metrics from the downstream metrics that connect to pipeline and revenue. It is a useful reference for teams that are heavy on content production but light on measurement discipline.

The Role of Context in Making Metrics Meaningful

A conversion rate of 2.5% is not good or bad on its own. It depends on your industry, your price point, your traffic source, and what it was last month. Context is what transforms a number into an insight.

The three most important forms of context are: historical comparison, which tells you whether things are improving; benchmark comparison, which tells you how you compare to the market; and segmentation, which tells you which parts of the business or audience are driving the overall number.

Historical comparison is the most accessible and the most commonly misused. Month-on-month comparisons look impressive in a growth phase and alarming in a seasonal dip, even when neither reflects a genuine change in underlying performance. Year-on-year comparisons are more reliable for most businesses, but they can also mask structural shifts if you rely on them exclusively.

Segmentation is where most teams underinvest. An average conversion rate of 2.5% might be hiding a 6% conversion rate from organic search and a 0.8% rate from paid social. Treating those as a single number and optimising accordingly is how budgets get misallocated. The average obscures the signal.
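The arithmetic behind that example is worth seeing once. With invented traffic numbers, here is how two very different segment rates blend into a bland aggregate:

```python
# Illustration of an aggregate conversion rate hiding segment-level
# signal. Visitor and conversion counts are invented for the example.
segments = {
    "organic_search": {"visitors": 10_000, "conversions": 600},   # 6.0%
    "paid_social":    {"visitors": 20_000, "conversions": 160},   # 0.8%
}

total_visitors = sum(s["visitors"] for s in segments.values())
total_conversions = sum(s["conversions"] for s in segments.values())
aggregate_rate = total_conversions / total_visitors

print(f"aggregate: {aggregate_rate:.2%}")  # ~2.53%
for name, s in segments.items():
    print(f"{name}: {s['conversions'] / s['visitors']:.2%}")
```

The aggregate sits near 2.5% only because paid social's volume drags the blended number down; optimise against the average and you would under-invest in the channel converting at more than seven times the rate of the other.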

When I was managing significant paid media budgets across multiple markets, the single most valuable habit we built was breaking every top-line metric into at least three cuts before drawing any conclusions: by channel, by geography, and by audience segment. Nine times out of ten, the aggregate number was misleading in some direction. The insight was always in the breakdown.

Email Metrics Deserve Special Mention

Email marketing sits in an interesting position in the metrics conversation. It is one of the oldest digital channels and one of the most measured, and yet email reporting has become significantly less reliable in recent years due to changes in how email clients handle tracking.

Open rate was the primary email KPI for most teams for a long time. Since Apple’s Mail Privacy Protection began pre-loading email content regardless of whether a recipient actually opened the message, open rate data for a significant portion of email lists has been inflated and unreliable. Many teams have not adjusted their reporting or their benchmarks to account for this.

The metrics that remain reliable for email are click rate, click-to-open rate on platforms where open data is more trustworthy, conversion rate from email traffic, and revenue per email sent for e-commerce. HubSpot’s guide to email marketing reporting covers how to structure email measurement in a way that focuses on downstream outcomes rather than top-of-funnel activity metrics.
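Computed from a hypothetical campaign, the still-reliable email measures look like this; open-based metrics are deliberately absent:

```python
# Invented campaign figures. The formulas, not the numbers, are the point.
sent = 50_000
clicks = 1_250
orders = 100
revenue = 7_500.0

click_rate = clicks / sent                 # 2.5%
conversion_from_email = orders / clicks    # 8.0% of email traffic converts
revenue_per_email = revenue / sent         # £0.15 per email sent

print(f"click rate: {click_rate:.2%}")
print(f"conversion from email traffic: {conversion_from_email:.2%}")
print(f"revenue per email sent: £{revenue_per_email:.2f}")
```

All three denominators are under your control and all three numerators require a real action, which is why they survived Mail Privacy Protection when open rate did not.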

The broader point is that measurement environments change. The metrics you trusted three years ago may not be reliable today, and the responsible thing is to audit your reporting regularly rather than assuming the numbers mean what they always meant.

When Good Metrics Produce Bad Decisions

Even well-chosen metrics can drive poor decisions if they are optimised in isolation. This is sometimes called Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. The metric stops reflecting the underlying reality because behaviour changes to hit the number rather than to achieve the outcome the number was supposed to represent.

Cost per acquisition is a good example. Optimising hard for CPA will often lead you to cut channels that are expensive in the short term but drive higher-value customers over time. You hit the CPA target. The business loses its best customers. The metric looked right. The decision was wrong.

The antidote is to measure outcomes at multiple time horizons simultaneously. Short-term CPA sits alongside 12-month customer value. Conversion rate sits alongside average order value and repeat purchase rate. No single metric gets optimised in isolation without reference to the others. This is harder to build and harder to report, but it is the difference between measurement that drives good decisions and measurement that drives the appearance of good performance.
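A sketch of what that looks like in practice, with invented channel figures: judge each channel's CPA alongside the 12-month value of the customers it brings in, never one without the other.

```python
# Hypothetical two-channel comparison. "value_12m" is the average
# 12-month customer value per acquired customer, an assumed figure.
channels = [
    {"name": "paid_search", "spend": 50_000, "customers": 500, "value_12m": 120},
    {"name": "content",     "spend": 40_000, "customers": 200, "value_12m": 600},
]

for ch in channels:
    cpa = ch["spend"] / ch["customers"]
    return_12m = (ch["customers"] * ch["value_12m"]) / ch["spend"]
    print(f"{ch['name']}: CPA £{cpa:.0f}, 12-month return {return_12m:.1f}x")

# paid_search wins on CPA (£100 vs £200) but returns 1.2x over 12 months;
# content loses on CPA but returns 3.0x. Optimising CPA alone cuts the
# channel that actually pays for itself.
```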

There is more on how these frameworks connect to broader analytics strategy in the Marketing Analytics and GA4 hub, including how to configure GA4 to support multi-metric measurement rather than defaulting to the platform’s built-in reports.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the difference between a marketing metric and a KPI?
A metric is any measurable data point in your marketing activity, such as sessions, clicks, or impressions. A KPI is a metric that has been selected because it directly reflects progress toward a specific business goal. Not every metric is a KPI, and treating them as equivalent leads to dashboards full of data that do not drive decisions.
How many KPIs should a marketing team track?
Most marketing functions should have no more than five or six true KPIs. Beyond that, accountability becomes diffuse and the reporting conversation loses focus. Everything else should be categorised as either a diagnostic metric, checked when a KPI moves in the wrong direction, or a reporting metric that gets logged but does not drive strategic decisions.
What are vanity metrics in marketing?
Vanity metrics are numbers that look positive but have weak or no connection to business outcomes. Common examples include social media follower counts, raw impressions, and website traffic that does not convert. They tend to persist in reporting because they are easy to improve and politically comfortable to present, not because they tell you whether marketing is working.
Why is context important when interpreting marketing metrics?
A metric without context is just a number. A 2.5% conversion rate could represent strong performance or a serious problem depending on your industry, traffic source, price point, and historical baseline. The three most useful forms of context are historical comparison, market benchmarking, and audience or channel segmentation. Aggregate numbers frequently hide the signals that matter most.
Are email open rates still a reliable marketing KPI?
Open rates have become significantly less reliable since Apple’s Mail Privacy Protection began pre-loading email content, inflating open data for a large portion of email lists. Teams that have not adjusted their email benchmarks since this change may be working from misleading numbers. Click rate, click-to-open rate where data is trustworthy, and conversion rate from email traffic are more dependable measures of email performance.