Stop Tracking KPIs. Start Tracking Decisions.

Most marketing teams track too many KPIs and make too few decisions with them. The problem is not a lack of data. It is that KPIs have become a reporting ritual rather than a decision-making tool, and the two things are not the same.

If your weekly dashboard has 40 metrics and your team cannot tell you which three would change what you do next week, you are not tracking performance. You are performing the act of tracking performance. There is a meaningful difference.

Key Takeaways

  • KPIs only have value if they are connected to a specific decision. If no one can say what changes when a metric moves, it should not be on your dashboard.
  • Most marketing dashboards are reporting theatre. They communicate activity, not commercial progress.
  • Analytics tools give you a perspective on reality, not reality itself. Trends and directional movement matter more than precise numbers.
  • The right number of KPIs for most marketing teams is between three and five. Everything else is context or noise.
  • The question to ask of every metric is not “is this interesting?” but “would this change a decision?”

Why KPI Culture Became a Problem

KPIs were a good idea. A small number of key performance indicators, chosen carefully, tied to business outcomes. That was the original intent. What happened in most marketing teams is that the word “key” got quietly dropped. We ended up with performance indicators, plural, many of them only loosely connected to performance in any commercially meaningful sense.

I have sat in more quarterly business reviews than I can count where the deck ran to 60 slides and every metric was green. Revenue was flat. The client was unhappy. But the metrics were green. That is the clearest sign that something has gone wrong with how a team thinks about measurement.

The problem compounds when you add the layer of analytics tooling. GA4, Adobe Analytics, Search Console, email platforms, paid media dashboards. Each one produces its own set of numbers. Each one has its own attribution logic, its own session definitions, its own classification quirks. Pull them all into a single reporting view and you do not get clarity. You get competing perspectives on the same underlying reality, none of which is fully accurate.

I spent years managing large media accounts where the client’s CRM showed one conversion number, the platform showed another, and GA showed a third. All three were measuring the same campaign. None of them agreed. The instinct is to reconcile them into a single true number. The more honest response is to accept that you are working with approximations, and to focus on what the approximations are telling you directionally rather than obsessing over which one is correct.

If you want a broader grounding in how to think about analytics without getting lost in the tooling, the Marketing Analytics and GA4 hub covers the frameworks and practical approaches worth knowing.

What a KPI Is Actually For

A KPI is a decision trigger. That is its only legitimate function. If a metric moves and no one changes anything as a result, it was not a key performance indicator. It was a number on a slide.

The test I use is simple: for every metric on your dashboard, ask what decision it informs. Not “what does it tell us” in a general sense, but specifically: if this number goes up by 20%, what do we do differently? If it drops by 20%, what changes? If the honest answer is “we’d note it and move on”, it is not a KPI. It is a vanity metric with better branding.

This matters more than it might sound, because attention is finite. When you have 40 metrics competing for a team’s analytical focus, the genuinely important signals get buried. The metrics that should be driving budget reallocation, channel mix decisions, or creative pivots end up sharing space with click-through rates and social reach figures that no one is going to act on regardless of what they show.

Good data-driven marketing is not about having more data. It is about having the right data, interpreted honestly, connected to action. Semrush’s breakdown of data-driven marketing makes this point clearly: the value is in the decisions the data enables, not in the volume of data itself.

The Dashboard Problem Nobody Talks About

Dashboards have become status objects. A well-designed dashboard with live data feeds and colour-coded metrics communicates sophistication. It signals that the team is on top of things. That signal is often false.

I have seen agencies spend weeks building elaborate reporting infrastructure for clients who never looked at it. The dashboard existed to demonstrate capability, not to inform decisions. The client wanted the comfort of knowing the data was being collected. The agency wanted the appearance of rigour. Neither party was actually using the numbers to change what they did.

The other dashboard problem is accuracy. Every analytics tool introduces distortion. Referrer data gets lost. Bot traffic inflates session counts. Cross-device journeys break attribution models. GA4’s event-based structure, while more flexible than Universal Analytics, introduces its own classification issues depending on how the implementation was set up. Moz’s GA4 preparation guide is worth reading precisely because it is honest about what the platform does well and where it falls short.

None of this means analytics tools are useless. It means you should treat them as perspectives rather than ground truth. When email open rates drop 15%, that is worth investigating. Whether the precise number is 15% or 12% or 18% matters less than the directional signal. HubSpot’s email reporting guide makes a similar point about the limits of open rate data, particularly since Apple’s Mail Privacy Protection changed how opens are recorded.

The teams that handle this well are the ones who have explicitly agreed on what their numbers mean and what they do not mean. They know which metrics are reliable and which are indicative. They do not pretend their conversion data is accurate to three decimal places when the attribution model has a 30% gap.
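The “directional, not precise” idea can be made concrete. A minimal sketch of treating period-over-period movement as signal or noise; the 10% noise band and the open-rate figures are illustrative assumptions, not a standard from any analytics tool:

```python
def directional_signal(previous: float, current: float, noise_band: float = 0.10) -> str:
    """Classify a period-over-period change as signal or noise.

    noise_band is the relative change we agree to treat as measurement
    noise; 10% is an illustrative default, not an industry standard.
    """
    if previous == 0:
        return "no-baseline"
    change = (current - previous) / previous
    if abs(change) <= noise_band:
        return "noise"
    return "up" if change > 0 else "down"

# An open rate moving from 0.42 to 0.36 is a ~14% relative drop:
# outside the agreed noise band, so worth investigating, even though
# neither number is precisely accurate.
print(directional_signal(0.42, 0.36))  # -> "down"
```

The useful part is not the arithmetic but the agreement it forces: the team decides in advance what size of movement counts as a signal, instead of arguing about it metric by metric.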

How Many KPIs Should a Marketing Team Actually Track?

Three to five. That is the honest answer for most marketing functions. Not three to five per channel. Three to five total, connected to the business outcomes the marketing team is accountable for.

This will feel wrong to people who have built careers around comprehensive measurement frameworks. But consider what happens in practice. When I was running an agency with 100 people and managing performance across multiple client accounts, the accounts that performed best were almost always the ones where the client had a clear, short list of what success looked like. Usually it was revenue, pipeline, or cost per acquisition. Sometimes just one of those.

The accounts that struggled were the ones where success was defined by a long list of channel metrics, none of which connected cleanly to the client’s P&L. We could show impressive numbers across 15 dimensions while the client’s business went sideways. That is not measurement. That is insulation from accountability.

The right structure is a small number of outcome metrics at the top, a slightly larger set of leading indicators that predict those outcomes, and a separate layer of diagnostic metrics that you only pull when something looks wrong. The diagnostic layer is not your dashboard. It is your investigation toolkit.

Mailchimp’s overview of core marketing metrics is a reasonable starting point for thinking about which metrics belong at which layer, though the instinct to track everything in the list is one worth resisting.

The Metrics That Get Dropped When You Apply This Filter

When you apply the “does this change a decision?” filter rigorously, a lot of commonly tracked metrics do not survive.

Social media follower counts rarely inform any decision worth making. If your follower count grows 10% in a quarter, what changes? For most brands, nothing. It is a number that feels like progress because it goes up, not because it connects to anything commercial.

Bounce rate is another one. In Universal Analytics it was a blunt instrument at best. In GA4 the concept has been replaced by engagement rate, which is marginally more useful but still rarely decision-relevant at the aggregate level. If your site-wide bounce rate moves, you cannot act on that without drilling into which pages, which traffic sources, which devices. By the time you have done that analysis, you are no longer looking at the headline metric at all.

Impressions. Share of voice. Brand search volume. These are all potentially interesting as contextual data. They are not KPIs unless you have a specific decision that hinges on them. If you are running a brand awareness campaign with a budget committed to it, share of voice might be a legitimate outcome metric. If you are just tracking it because the tool makes it easy to pull, it is noise.

The uncomfortable truth is that most of the metrics that get dropped when you apply this filter are the ones that make marketing look busy and productive regardless of whether it is generating commercial value. That is not a coincidence. Vanity metrics persist because they serve a purpose, just not the purpose they are supposed to serve.

What to Do With GA4 Custom Events Instead of More KPIs

One of the more useful things GA4 enables is precise event tracking tied to the specific actions that matter for your business. The default implementation gives you a lot of data. Most of it is not particularly useful. The value comes from configuring custom events around the behaviours that actually predict conversion or retention.

For a SaaS product, that might be trial activations, feature adoption events, or the specific in-app actions that correlate with paid conversion. Moz’s guide to GA4 custom event tracking for SaaS is one of the better practical resources on this, because it starts from business outcomes rather than from what the platform makes easy to track.

For an ecommerce business, the relevant events are usually narrower than people think. Add to cart, checkout initiation, purchase. The micro-interactions before those steps are diagnostic data, not KPIs. Track them because they help you understand drop-off. Do not put them on your primary dashboard as if they represent progress toward a business goal.

The broader point is that GA4’s flexibility is an invitation to build measurement that fits your business model, not a reason to track everything the platform supports. Most GA4 implementations I have seen are either under-configured (relying entirely on default events that do not map to business outcomes) or over-configured (tracking every possible interaction with no clear sense of what the data will be used for). Neither is useful.

If you are running A/B tests through GA4, the same principle applies. Semrush’s guide to A/B testing in GA4 covers the mechanics well, but the more important question before you run any test is what decision you will make based on the result. If the answer is not clear before the test starts, the test is not ready to run.

The Honest Approximation Principle

Marketing does not need perfect measurement. It needs honest approximation.

I have judged the Effie Awards, which are specifically about marketing effectiveness and the evidence behind it. One of the things that becomes clear when you read a lot of effectiveness cases is that the best work is rarely supported by perfect attribution data. It is supported by a coherent story about what changed in the business and a reasonable argument for why the marketing contributed to that change. The measurement is directional and honest about its limits, not precise and falsely confident.

That is a more useful model than what most teams operate with. The instinct to add more tracking, more tools, more metrics is partly a response to genuine uncertainty about what is working. But more data does not reduce that uncertainty if the data is poorly interpreted or if no one has agreed on what they are going to do with it.

The teams I have seen handle measurement well share a few common characteristics. They have a short list of metrics they genuinely trust. They are explicit about what their analytics tools do and do not capture. They distinguish between reporting (communicating what happened) and analysis (understanding why it happened and what to do next). And they are willing to say “we don’t know” when they do not know, rather than filling the gap with a metric that sounds relevant but does not actually answer the question.

Conversion tracking has been a contested space for a long time. Search Engine Land’s coverage of conversion tracking evolution is a useful reminder that the tools have always been imperfect and the industry has always had to work with incomplete data. That is not new. What changes is how honestly teams acknowledge it.

If you want to go deeper on the analytics layer, including how to structure reporting that actually informs decisions rather than just documents activity, the Marketing Analytics and GA4 hub covers the full range of frameworks and practical approaches.

A Practical Approach to Cutting Your KPI List

If you want to apply this in practice, start with your current dashboard and put every metric through three questions. First: does this connect to a business outcome we are accountable for? Second: if this metric moved significantly in either direction, would we change anything? Third: is this metric reliable enough to act on, or is it too distorted by tracking gaps and attribution issues to trust?

Anything that fails any of those three questions moves off the primary dashboard. It does not disappear. It goes into a secondary layer that you access when you need to diagnose a problem, not one you review every week as a matter of course.

What you should be left with is a small number of metrics that are directly connected to commercial outcomes, that your team has agreed to act on, and that are measured consistently enough to be directionally reliable even if they are not perfectly precise. That is a KPI framework. Everything else is data collection.

The early-career version of me, the one who taught himself to code because the MD said no to a website budget, would have found this counterintuitive. More capability always felt like more value. More metrics felt like more rigour. It took years of watching well-instrumented campaigns fail to produce commercial results before the simpler principle became obvious: measure what matters, act on what you measure, and be honest about the rest.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How many KPIs should a marketing team track?
Most marketing teams should track three to five KPIs connected directly to business outcomes. Channel-level metrics and diagnostic data can sit in a secondary layer, accessed when something needs investigating rather than reviewed as a matter of routine.
What is the difference between a KPI and a vanity metric?
A KPI informs a specific decision. If a metric moves and no one changes anything as a result, it is a vanity metric regardless of what it is called. The test is simple: if this number goes up or down significantly, what do we do differently?
Why does GA4 data often disagree with other analytics tools?
GA4, like all analytics platforms, applies its own session definitions, attribution logic, and event classification rules. When compared with CRM data, paid media platforms, or other analytics tools, discrepancies are normal. The tools are measuring the same activity through different lenses, not reporting objective ground truth. Directional trends are more reliable than precise numbers.
What should a marketing dashboard actually include?
A primary dashboard should show the three to five outcome metrics the team is accountable for, plus a small number of leading indicators that predict those outcomes. Diagnostic metrics, channel-level breakdowns, and secondary data should live in a separate layer that teams access when they need to understand why something changed, not as part of regular reporting.
Is it possible to measure marketing effectiveness without perfect attribution?
Yes, and most effective marketing is measured this way. The goal is honest approximation: a coherent directional picture of what changed in the business and a reasonable argument for marketing’s contribution to that change. Perfect attribution is not achievable in most real-world conditions. Teams that acknowledge this and work with reliable approximations tend to make better decisions than teams that pursue false precision.
