Performance Marketing KPIs That Reflect Business Health

Performance marketing KPIs are the metrics you use to measure whether paid activity is generating real commercial return, not just clicks and impressions. The best ones connect spend to revenue, margin, or customer value in a way that holds up to scrutiny from a CFO, not just a media planner.

Most teams track too many of the wrong ones and too few of the right ones. The result is dashboards full of activity data dressed up as performance data, and a slow erosion of confidence in what paid advertising is actually doing for the business.

Key Takeaways

  • Most performance marketing dashboards measure activity, not commercial impact. The gap between the two is where budget gets wasted.
  • ROAS is a useful efficiency signal but a poor proxy for profitability. Margin-adjusted return on ad spend tells a more honest story.
  • Customer acquisition cost only makes sense in the context of customer lifetime value. Without LTV, CAC is just a number without a benchmark.
  • Click-through rate and impression share are diagnostic metrics, not performance metrics. Treat them accordingly.
  • The most dangerous KPI is one that looks good but has no relationship to revenue. Optimising for it will cost you money.

I spent several years managing large paid media budgets across dozens of categories, and the single most common problem I saw was not poor targeting or weak creative. It was measurement that rewarded the wrong behaviour. Teams were hitting their numbers and losing money at the same time, because the numbers they were hitting had no bearing on what the business actually needed.

Why Most KPI Frameworks Miss the Point

There is a version of performance marketing measurement that looks rigorous from the outside: weekly reporting decks, colour-coded dashboards, trend lines going up and to the right. And then there is measurement that actually tells you whether the money is working. These two things are not the same, and confusing them is expensive.

The problem usually starts with platform metrics. Google, Meta, and every other ad platform have a vested interest in showing you data that makes their channel look good. Attribution windows are set generously. Conversions are counted in ways that flatter the platform. View-through attribution gets bundled in with click-through attribution as if they are equivalent. None of this is necessarily dishonest, but it is optimistic, and optimistic measurement compounds over time into serious budget misallocation.

When I was running an agency, I had a client in retail who was convinced their paid social was performing strongly because the platform dashboard showed a return on ad spend above their target. When we pulled the actual revenue data from their CRM and matched it back to media spend, the real number was about half what the platform was claiming. The difference was attribution overlap: the same customer was being counted by paid search, paid social, and email simultaneously. Three channels, one sale, three claims of credit. If you are building KPI frameworks on top of that kind of data, you are building on sand.

If you want a broader grounding in how paid channels fit together before getting into the metrics, the paid advertising hub covers the strategic and operational landscape in more depth.

Which KPIs Actually Matter in Performance Marketing?

The answer depends on your business model, your margin structure, and your growth stage. But there is a hierarchy that holds across most situations, and it starts with revenue, not clicks.

Return on Ad Spend: Useful but Incomplete

Return on ad spend, or ROAS, is the ratio of revenue generated to media spend. If you spend £10,000 and generate £40,000 in revenue, your ROAS is 4x. It is a clean, communicable number, which is probably why it became the default performance metric for so many teams.

The limitation is that ROAS ignores margin. A 4x ROAS on a product with 20% gross margin is a loss-making campaign. A 2x ROAS on a product with 70% gross margin might be highly profitable. If your KPI framework does not account for the economics of what you are selling, you can hit your ROAS targets and still destroy value.

The better version is margin-adjusted ROAS, which takes gross margin into account so that the return figure reflects profit contribution rather than just top-line revenue. (It is sometimes conflated with MER, the marketing efficiency ratio, but MER is conventionally a blended figure comparing total revenue to total marketing spend.) Margin adjustment is more complex to calculate, especially across a mixed product catalogue, but it is far more useful as a decision-making tool.

For businesses with variable margin across their product range, I would always recommend building ROAS targets by product category rather than applying a single blended target across everything. A single number hides too much variation to be actionable.
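To make the arithmetic concrete, here is a minimal sketch in Python of how margin adjustment can flip the picture by category. The category names and figures are illustrative, not from any real account:

```python
# Illustrative figures only: spend, revenue, and gross margin by product category.
categories = {
    "accessories": {"spend": 10_000, "revenue": 40_000, "gross_margin": 0.20},
    "apparel":     {"spend": 10_000, "revenue": 20_000, "gross_margin": 0.70},
}

for name, c in categories.items():
    roas = c["revenue"] / c["spend"]          # top-line return per pound spent
    margin_roas = roas * c["gross_margin"]    # gross profit per pound spent
    profitable = margin_roas > 1.0            # gross profit must exceed media cost
    print(f"{name}: ROAS {roas:.1f}x, margin-adjusted {margin_roas:.2f}x, "
          f"{'profitable' if profitable else 'loss-making'}")
```

The accessories line is the article's 4x-ROAS-at-20%-margin case: it clears the ROAS target and still loses money, which a single blended target would hide.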

Customer Acquisition Cost and Why It Needs a Companion Metric

Customer acquisition cost, or CAC, is the total media spend divided by the number of new customers acquired. It is a sensible metric in principle, but it is almost meaningless without customer lifetime value sitting next to it.

A CAC of £80 is excellent if your average customer spends £600 over their lifetime with you. It is catastrophic if they spend £60 once and never return. I have seen both situations presented with the same level of confidence, because the team had the CAC number but had never seriously modelled the LTV side of the equation.

The LTV:CAC ratio is the metric that actually tells you whether your acquisition economics are sustainable. Most growth-stage businesses target a ratio of 3:1 or higher, meaning the lifetime value of a customer is at least three times what it cost to acquire them. Below that, you are likely subsidising growth in a way that will eventually catch up with you.
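As a quick sketch of the calculation, with illustrative numbers throughout. Note one assumption worth labelling: LTV here is defined on gross margin rather than raw lifetime revenue, which is the more conservative convention and changes how the 3:1 threshold reads:

```python
# CAC = total media spend / new customers acquired. Illustrative figures.
media_spend = 80_000
new_customers = 1_000
cac = media_spend / new_customers          # £80 per new customer

# A simple margin-based lifetime value; real models also discount future revenue.
lifetime_revenue = 600
gross_margin = 0.60                        # assumed blended margin
ltv = lifetime_revenue * gross_margin      # £360 of lifetime contribution

ratio = ltv / cac                          # 4.5:1, above the common 3:1 benchmark
print(f"CAC £{cac:.0f}, LTV £{ltv:.0f}, LTV:CAC {ratio:.1f}:1")
```

Run on revenue instead of margin, the same inputs give 7.5:1, which is why it matters to state which definition of LTV a reported ratio uses.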

One of the things I noticed when judging the Effie Awards was how rarely entrants presented LTV data alongside acquisition metrics. Campaigns would show strong CAC numbers and impressive acquisition volumes, but the commercial sustainability question was left unanswered. It is a gap that matters far more in the real world than it does in award submissions.

Building a reliable LTV model requires decent CRM data. If your customer data is fragmented or incomplete, that is worth fixing before you invest heavily in acquisition. Getting your CRM infrastructure right is not glamorous work, but it is the foundation that makes LTV:CAC a real metric rather than an estimate.

Cost Per Acquisition: Where the Detail Lives

Cost per acquisition, or CPA, is similar to CAC but typically applied at the campaign or channel level rather than the business level. It measures what you paid, on average, to generate a specific conversion action, whether that is a purchase, a lead, a sign-up, or a trial.

CPA is most useful when you have defined what a conversion is worth. Without a target CPA grounded in your unit economics, you are optimising for efficiency without knowing whether efficient is good enough. The target CPA should be derived from your margin and LTV data, not set arbitrarily or borrowed from industry benchmarks.
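The working-backwards step can be sketched like this. Every number below is an assumption for illustration, including the repeat-purchase multiplier and the 30% headroom, which you would tune to your own P&L:

```python
# Deriving a target CPA from unit economics rather than benchmarks. Illustrative.
average_order_value = 120
gross_margin = 0.55
repeat_multiplier = 1.8    # expected orders per customer over the horizon (assumed)

# Ceiling: the most you can pay per acquisition and still break even on gross profit.
max_cpa = average_order_value * gross_margin * repeat_multiplier

# Target: sit below the ceiling to leave contribution after other costs (assumed 30%).
target_cpa = max_cpa * 0.70
print(f"ceiling £{max_cpa:.0f}, target £{target_cpa:.0f}")
```

The point of the structure is that the ceiling falls out of margin and repeat behaviour, so when either of those changes, the target CPA should move with them.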

The other thing worth watching with CPA is what happens when you push for lower numbers. Reducing CPA by narrowing targeting or cutting bids in competitive auctions often comes at the cost of volume. At some point, the marginal customer you are acquiring at a lower CPA is worth less than the customers you stopped reaching by tightening your parameters. The relationship between CPA and volume is not linear, and optimising for one without watching the other is a common mistake.

Conversion Rate: The Metric That Indicts the Whole Funnel

Conversion rate is the percentage of people who take a desired action after clicking on your ad. It is one of the few metrics in performance marketing that tells you something meaningful about what happens after the click, not just before it.

A low conversion rate with strong click volume usually means one of three things: the audience is wrong, the landing page is wrong, or the offer is wrong. Any of those can sink an otherwise well-structured campaign. I have seen campaigns with excellent click-through rates and terrible conversion rates, because the ad promised something the landing page did not deliver. The click was earned; the sale was not.

Conversion rate optimisation is often treated as a separate discipline from paid media, but in practice they are inseparable. The paid team controls what happens before the click. What happens after it is equally important to the commercial outcome, and if you are not measuring and improving it, you are leaving money on the table regardless of how well the media is performing.

It is also worth noting that conversion rate is highly sensitive to traffic quality. If you expand targeting to reach a broader audience, your conversion rate will typically fall even if the absolute number of conversions increases. That is not necessarily a problem, but it can look alarming if you are watching the rate without watching the volume and the revenue alongside it. Landing page quality is often the hidden constraint in campaigns that appear to be underperforming at the media level.
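A small worked comparison, with made-up numbers, shows why the falling rate can be a non-problem when volume and revenue are read alongside it:

```python
# Widening targeting typically lowers the rate while raising the absolute numbers.
narrow = {"clicks": 10_000, "conversions": 400,   "aov": 90}   # illustrative
broad  = {"clicks": 40_000, "conversions": 1_000, "aov": 85}

for name, s in (("narrow", narrow), ("broad", broad)):
    cr = s["conversions"] / s["clicks"]          # conversion rate
    revenue = s["conversions"] * s["aov"]        # total revenue
    print(f"{name}: CR {cr:.1%}, conversions {s['conversions']}, revenue £{revenue:,}")
```

Here the broader targeting cuts the conversion rate from 4.0% to 2.5% while more than doubling both conversions and revenue. Watching the rate in isolation would read that as a decline.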

The Metrics That Belong in Diagnostics, Not Dashboards

Click-through rate, impression share, quality score, engagement rate: these are all useful signals when you are diagnosing a problem, but they are not performance metrics. Treating them as primary KPIs is one of the most common ways performance marketing teams end up optimising for the wrong things.

A high click-through rate is a good sign that your creative is resonating with the audience seeing it. It tells you nothing about whether those clicks are converting, whether the customers acquired are profitable, or whether the campaign is contributing to business growth. CTR belongs in the diagnostic layer of your measurement framework, not the performance layer.

The same is true of impression share. Knowing that you are capturing 60% of available impressions for a given keyword is useful context, but it only matters if those impressions are translating into commercial outcomes. I have seen teams fight hard to increase impression share on keywords that were generating clicks but no revenue. The metric looked good. The campaign was not working.

The discipline is to keep diagnostic metrics in the diagnostic layer and commercial metrics at the top of your reporting. When you present to a finance director or a board, they should be seeing revenue, margin contribution, CAC, and LTV:CAC. The CTR and quality score data is for the team running the campaigns, not for the conversation about whether the investment is justified.

Incrementality: The Question Most Teams Are Not Asking

Incrementality is the measure of how much of your attributed revenue would have happened anyway without the advertising. It is the most important question in performance marketing and the one that gets asked least often.

When I was at lastminute.com, I ran a paid search campaign for a music festival that generated six figures of revenue within roughly a day. It was a clean, simple campaign, and the numbers were immediately compelling. But the honest question, which we did ask, was: how many of those people would have found and bought the tickets anyway? They were searching for the festival by name. They were high-intent, brand-aware customers who were already in the market. The campaign made it easy for them to convert, but how much of that revenue was genuinely incremental?

That question is harder to answer than it sounds, and most attribution models do not attempt it. Last-click attribution assigns full credit to the final touchpoint. Even more sophisticated multi-touch models distribute credit across the funnel without testing whether any of those touchpoints actually changed the outcome. Incrementality testing, typically through holdout experiments where a matched group of users is not shown the ads, is the closest thing to a real answer. It is more work, but it is the work that tells you whether you are creating demand or just capturing it.
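The readout from a holdout test reduces to simple arithmetic once the groups are matched. This sketch uses invented figures and deliberately omits the significance testing a real experiment needs:

```python
# Simple holdout readout: one matched group sees the ads, one does not. Illustrative.
test    = {"users": 100_000, "conversions": 2_400}   # ads shown
holdout = {"users": 100_000, "conversions": 2_000}   # ads withheld

test_rate = test["conversions"] / test["users"]
base_rate = holdout["conversions"] / holdout["users"]

# Conversions the advertising actually caused, versus demand that existed anyway.
incremental_conversions = (test_rate - base_rate) * test["users"]
incrementality = incremental_conversions / test["conversions"]
print(f"incremental: {incremental_conversions:.0f} "
      f"({incrementality:.0%} of attributed conversions)")
```

In this example only about a sixth of the attributed conversions are incremental; the rest would have happened without the spend, which is exactly the gap last-click attribution cannot see. A real test would also check whether the lift clears statistical significance before acting on it.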

If businesses could retrospectively measure the true incremental impact of their paid activity, I think the results would be uncomfortable for a lot of teams. A significant portion of what gets attributed to performance marketing is demand that existed independently of the advertising. That does not mean the advertising is worthless, but it does mean the headline ROAS numbers are often overstated, sometimes substantially.

Understanding how demand generation works at a structural level helps clarify which campaigns are likely to be genuinely incremental and which are mostly capturing existing intent. The distinction matters enormously for how you allocate budget across channels.

How to Build a KPI Framework That Holds Up

A functional performance marketing KPI framework has three layers. The commercial layer sits at the top and contains the metrics that connect directly to business outcomes: revenue, margin contribution, LTV:CAC ratio, and new customer growth. These are the numbers that matter to the business and should be the primary lens for evaluating whether paid activity is working.

The channel layer sits in the middle and contains the metrics that help you understand how individual channels and campaigns are performing: ROAS by channel, CPA by campaign, conversion rate by audience segment. These are the numbers that inform budget allocation and optimisation decisions. They should be directionally consistent with the commercial layer, and when they are not, that is a signal worth investigating.

The diagnostic layer sits at the bottom and contains the operational metrics: CTR, quality score, impression share, frequency, relevance scores. These are the numbers that help you understand why something is or is not working at the execution level. They are useful for troubleshooting and for giving the team running the campaigns enough visibility to do their jobs well. They should not be the primary basis for reporting upward.

The most common mistake I see is frameworks that are heavy at the bottom and thin at the top. Teams have extensive diagnostic data and very little commercial data. The result is confident reporting on things that do not matter very much and silence on the things that do.

Retargeting campaigns are a good example of where this plays out. Retargeting typically shows strong ROAS because it targets people who were already close to converting. The diagnostic metrics look excellent. But the incrementality question is almost never asked: would those people have converted anyway? If the answer is largely yes, the retargeting budget is mostly being spent on people who did not need to be advertised to. The framework needs to be able to surface that question, not just report the headline numbers.

For more on how these measurement principles apply across different paid channels and formats, the paid advertising section of The Marketing Juice covers channel-specific strategy and the commercial thinking that should sit behind it.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the most important KPI in performance marketing?
There is no single answer, because it depends on your business model and growth stage. For most businesses, the LTV:CAC ratio is the most commercially meaningful metric because it tells you whether your acquisition economics are sustainable over time. ROAS is widely used but incomplete without margin data sitting alongside it.
What is the difference between ROAS and return on ad spend?
They are the same thing. ROAS stands for return on ad spend and is calculated by dividing revenue generated by media spend. A ROAS of 4x means you generated four pounds of revenue for every pound spent on advertising. The limitation is that it measures revenue, not profit, so it needs to be interpreted alongside margin data to be commercially useful.
How do you set a target CPA for a performance marketing campaign?
Target CPA should be derived from your unit economics, not borrowed from industry benchmarks. Start with your average order value or customer lifetime value, apply your gross margin, and work backwards to the maximum you can afford to pay for an acquisition while remaining profitable. That number is your ceiling. Your actual target should sit below it to leave room for margin contribution after other costs.
Why is incrementality testing important in performance marketing?
Incrementality testing tells you how much of your attributed revenue would have happened without the advertising. Standard attribution models, including last-click and most multi-touch models, do not answer this question. They distribute credit across touchpoints but do not test whether those touchpoints actually changed the outcome. Without incrementality data, it is easy to overstate the value of campaigns that are mostly capturing existing demand rather than creating new demand.
What metrics should be excluded from a senior performance marketing report?
Click-through rate, impression share, quality score, and engagement rate are diagnostic metrics that belong in operational reporting for the team running campaigns, not in senior or board-level performance reports. These metrics describe how campaigns are behaving at the execution level, not whether the investment is generating commercial return. Senior reports should focus on revenue, margin contribution, customer acquisition cost, and lifetime value data.
