Paid Advertising ROI: Stop Measuring the Wrong Things

Measuring ROI in paid advertising sounds straightforward until you try to do it properly. The honest answer is that most businesses are measuring something, but not necessarily the right thing, and the gap between the two costs real money.

ROI in paid advertising is the ratio of revenue generated to the cost of generating it, expressed across whatever time horizon and attribution model you choose. That last clause is where most measurement frameworks quietly fall apart.

Key Takeaways

  • Attribution models don’t reveal truth, they reveal a perspective. Last-click attribution is still the default for many teams, and it systematically overstates the value of bottom-funnel channels.
  • ROAS and ROI are not the same metric. Confusing them leads to scaling campaigns that look profitable on the surface but erode margin at volume.
  • Incrementality testing is the most honest way to measure paid advertising impact, and almost nobody does it consistently.
  • The measurement framework you choose should match your business model, not the default settings in your ad platform’s reporting dashboard.
  • Vanity metrics survive in paid advertising because they’re easy to report. Holding campaigns to a contribution margin standard is harder but far more commercially useful.

Early in my career, I ran a paid search campaign for a music festival at lastminute.com. The campaign was not sophisticated. The targeting was clean, the copy was direct, and the landing page did exactly what it needed to do. Within roughly a day, it had driven six figures of revenue. What made it work wasn’t the technology or the strategy deck. It was that we knew precisely what we were measuring, what a conversion was worth, and what we were willing to pay for it. That clarity is rarer than it should be.

Why Most Paid Advertising Measurement Is Structurally Broken

The measurement problem in paid advertising is not primarily a technical one. The tools exist. Google Ads, Meta Ads Manager, and any decent analytics platform will generate more data than most teams know what to do with. The problem is structural: what gets measured gets optimised, and what gets reported gets resourced. When those two things are misaligned with actual business outcomes, you end up optimising for the wrong signals at scale.

The most common structural flaw is last-click attribution. It assigns full conversion credit to the final touchpoint before a sale, which tends to reward branded search, retargeting, and direct traffic, while systematically undervaluing the channels that created awareness and intent in the first place. If you’re running a blended paid strategy across search, social, and display, last-click attribution will almost always tell you to cut the top-of-funnel spend. That advice is often wrong.

The second structural flaw is conflating ROAS with ROI. Return on ad spend measures revenue relative to ad cost. Return on investment measures profit relative to total investment. A campaign generating £5 for every £1 spent on ads looks excellent until you factor in cost of goods, fulfilment, customer service, and agency fees. I’ve sat in enough P&L reviews to know that a 5x ROAS can coexist with a loss-making channel. The number on the dashboard and the number on the income statement are two different things.
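To make the ROAS/ROI distinction concrete, here is a minimal sketch. All figures are hypothetical assumptions chosen for illustration, not taken from any real campaign; the point is only that the two formulas diverge once non-ad costs enter.

```python
# Illustrative only: every figure below is an assumption, not real campaign data.

def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue per pound of ad spend."""
    return revenue / ad_spend

def roi(revenue: float, ad_spend: float, cogs: float,
        fulfilment: float, fees: float) -> float:
    """Return on investment: profit relative to total variable investment,
    not just the ad budget."""
    total_cost = ad_spend + cogs + fulfilment + fees
    return (revenue - total_cost) / total_cost

revenue = 50_000     # attributed revenue
ad_spend = 10_000    # the "£5 for every £1" dashboard view
cogs = 30_000        # cost of goods sold (60% of revenue, assumed)
fulfilment = 6_000   # picking, packing, shipping (assumed)
fees = 5_000         # agency and creative costs (assumed)

print(roas(revenue, ad_spend))                         # 5.0 — looks excellent
print(roi(revenue, ad_spend, cogs, fulfilment, fees))  # negative — the channel loses money
```

The same campaign shows a 5x ROAS and a negative ROI, which is exactly the gap between the dashboard and the income statement.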

If you’re building or reviewing your broader paid strategy, the Paid Advertising hub covers the full landscape, from channel selection to creative to measurement frameworks, in one place.

What ROI in Paid Advertising Actually Measures

Before choosing a measurement approach, it helps to be precise about what you’re actually trying to answer. There are at least three distinct questions that often get collapsed into a single “ROI” conversation:

First, is this campaign generating revenue above its cost? This is the basic efficiency question. It’s important but insufficient on its own.

Second, is this campaign generating revenue that would not have occurred without it? This is the incrementality question. It’s harder to answer but far more commercially relevant. A retargeting campaign that shows ads to people who would have bought anyway has a true incremental ROI far lower than its reported ROAS suggests.

Third, is this campaign the best use of this budget? This is the opportunity cost question. A campaign with a 300% ROI might still be the wrong investment if another channel would have returned 500% with the same spend.

Most measurement frameworks answer the first question reasonably well, struggle with the second, and rarely attempt the third. That’s a significant gap when you’re making resource allocation decisions across a portfolio of channels.

Attribution Models and What They Actually Tell You

Attribution modelling is the practice of assigning credit for a conversion across the touchpoints that preceded it. The model you choose shapes every optimisation decision that follows, which makes it one of the highest-leverage decisions in paid advertising measurement.

Last-click attribution is still widely used despite its well-documented limitations. It’s simple, auditable, and easy to explain in a client presentation. Those are not good reasons to use it as your primary measurement lens. Unbounce has written usefully about how attribution choices affect PPC optimisation decisions, and the core point holds: the model you use determines which campaigns look efficient and which look wasteful.

Data-driven attribution, which Google Ads now defaults to, uses machine learning to distribute credit based on observed conversion patterns. It’s more sophisticated than last-click, but it operates as a black box. You can’t fully audit it, and it has a known tendency to favour Google-owned touchpoints. That’s not a conspiracy, it’s an incentive structure. Being aware of it matters.

Linear attribution distributes credit equally across all touchpoints. Time-decay attribution weights recent touchpoints more heavily. Position-based attribution gives the most credit to the first and last touch. Each model tells a different story about the same customer experience, which is worth sitting with for a moment. The customer experience did not change. The story you’re telling about it did.

When I was running iProspect and managing significant search budgets across multiple clients, one of the recurring conversations was about how the same campaign could look like a top performer under one attribution model and a budget drain under another. The clients who made the best decisions were the ones who understood that attribution was a lens, not a verdict.

Incrementality Testing: The Most Honest Measurement You’re Probably Not Doing

Incrementality testing asks a simple question: what would have happened if we hadn’t run this campaign? The answer to that question is the true measure of a campaign’s value.

The standard approach is a holdout test. You take a statistically comparable group of users or geographic markets, exclude them from your paid advertising, and compare conversion rates between the exposed and unexposed groups over the same period. The difference represents incremental lift. The revenue generated by that lift, divided by the cost of the campaign, is your true incremental ROI.
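The holdout arithmetic above can be sketched in a few lines. Every number here is a hypothetical assumption for illustration; the structure of the calculation, not the figures, is the point.

```python
# Hypothetical holdout test: all figures are assumptions for illustration.
exposed_users = 100_000    # saw the ads
holdout_users = 100_000    # statistically comparable group, excluded from the ads

exposed_conversions = 2_300
holdout_conversions = 2_000

# Incremental lift: the difference in conversion rate between the two groups
lift = exposed_conversions / exposed_users - holdout_conversions / holdout_users
incremental_conversions = lift * exposed_users   # ~300 conversions the ads actually caused

avg_order_value = 80.0
campaign_cost = 20_000

incremental_revenue = incremental_conversions * avg_order_value  # ~£24,000
attributed_revenue = exposed_conversions * avg_order_value       # £184,000 claimed by attribution

true_roi = (incremental_revenue - campaign_cost) / campaign_cost  # ~0.2, i.e. 20%
reported_roas = attributed_revenue / campaign_cost                # 9.2
```

The dashboard reports a 9.2x ROAS; the holdout test says the campaign caused roughly 300 of the 2,300 conversions it claimed. Both numbers are "correct" — they answer different questions.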

This is not a new methodology. Direct mail marketers were running holdout tests decades before digital advertising existed. What’s changed is that digital platforms have made it technically easier to run these tests while simultaneously making it easier to avoid running them, because the default reporting dashboards always show positive numbers. There’s no dashboard widget that says “this campaign claimed credit for revenue it didn’t create.”

Branded keyword campaigns are the most common example of inflated reported ROAS. If someone searches for your brand name and clicks your ad, they were almost certainly going to convert anyway. The ad captured demand that already existed. Running a holdout test on branded search typically reveals that the incremental value is a fraction of what the attribution model reports. That’s not a reason to turn off branded search entirely; there are competitive and search-results real estate arguments for running it. But it is a reason to stop counting it as evidence that paid search is driving growth.

Meta’s Conversion Lift tool and Google Ads’ Experiments feature both support incrementality testing. Moz has covered how AI-assisted campaign management is changing how these experiments are structured, though the underlying logic of a controlled test remains the same regardless of how sophisticated the tooling becomes.

The Metrics That Actually Connect to Business Outcomes

Platform metrics like click-through rate, quality score, and cost per click are useful for diagnosing campaign mechanics. They are not business metrics. The gap between the two is where a lot of paid advertising reporting goes wrong.

The metrics that connect to business outcomes tend to sit further down the funnel and require more data to calculate. They include:

Cost per acquisition (CPA): The total cost of generating a single customer or conversion. This should be benchmarked against customer lifetime value, not just against the revenue from the first transaction. A £50 CPA looks expensive for a £60 product but reasonable if that customer spends £400 over two years.

Contribution margin per channel: Revenue minus variable costs, calculated at the channel level. This requires finance and marketing to share data, which is why most teams don’t do it. It’s also why most teams can’t answer with confidence whether their paid advertising is actually profitable.

New customer acquisition rate: What percentage of conversions attributed to paid advertising are genuinely new customers? Retargeting campaigns that convert existing customers at high ROAS are not growing the business. They’re monetising an audience that already existed.

Payback period: How long does it take to recover the cost of acquiring a customer through paid channels? For subscription or repeat-purchase businesses, this is often more useful than a point-in-time ROAS figure.
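The CPA and payback arithmetic from the metrics above can be sketched briefly. The £50 CPA, £60 product, and £400 two-year value come from the example in the text; the £10 monthly contribution figure is an added assumption.

```python
# Worked version of the CPA and payback examples; monthly_contribution is assumed.
cpa = 50.0                 # cost to acquire one customer
first_order_revenue = 60.0
two_year_value = 400.0     # expected customer spend over two years

# Benchmarked against the first transaction, the CPA looks expensive;
# benchmarked against lifetime value, it looks reasonable.
cost_vs_first_order = cpa / first_order_revenue  # ~0.83: CPA nearly equals first-order revenue
cost_vs_ltv = cpa / two_year_value               # 0.125: one-eighth of two-year value

# Payback period: months until cumulative contribution covers the acquisition cost
monthly_contribution = 10.0                      # margin per customer per month (assumed)
payback_months = cpa / monthly_contribution      # 5 months to break even
```

A point-in-time ROAS figure would flag this customer as unprofitable; the payback view shows the channel recovering its cost within five months.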

I’ve judged the Effie Awards, which is the closest thing marketing has to a peer-reviewed assessment of effectiveness. The campaigns that win consistently are the ones where the team can draw a clear line from the advertising activity to a measurable business outcome. Not a platform metric. Not an engagement rate. A business outcome. That standard is harder to meet than it sounds, but it’s the right standard.

How to Build a Measurement Framework That Holds Up

A measurement framework for paid advertising should be defined before the campaign launches, not retrofitted after the results come in. Retrofitting is how you end up with post-rationalised success stories that don’t survive contact with the next budget cycle.

Start with the business question. Not “what’s our ROAS target” but “what business outcome are we trying to move, and by how much, in what timeframe?” That question shapes everything that follows, including which metrics matter, which attribution model is appropriate, and what a credible result looks like.

Define your conversion events carefully. A conversion event should represent a meaningful business action, not just a trackable one. Page views, video completions, and add-to-cart events are trackable. They are not conversions in any commercially meaningful sense unless you can demonstrate they reliably predict revenue. Search Engine Journal’s coverage of how keyword matching affects campaign targeting is a useful reminder that the inputs to your campaigns, including how broadly or narrowly you’re targeting, directly affect the quality of the conversion data you’re collecting.

Set a primary metric and a guardrail metric. The primary metric is what you’re optimising for. The guardrail metric is what you’re protecting. For an ecommerce business, the primary metric might be contribution margin per order, and the guardrail might be return rate. Optimising purely for the primary metric without a guardrail is how you end up with campaigns that hit their targets while quietly destroying value elsewhere.
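The primary-plus-guardrail logic can be expressed as a simple check. The function name, thresholds, and figures below are all hypothetical assumptions; the point is that success requires passing both tests, not just the one you are optimising for.

```python
# Hypothetical guardrail check; all thresholds and figures are assumptions.
def campaign_passes(contribution_margin_per_order: float,
                    return_rate: float,
                    margin_target: float = 12.0,
                    return_rate_ceiling: float = 0.08) -> bool:
    """A campaign only counts as succeeding if it hits the primary metric
    (contribution margin) AND stays inside the guardrail (return rate)."""
    return (contribution_margin_per_order >= margin_target
            and return_rate <= return_rate_ceiling)

print(campaign_passes(15.0, 0.05))  # True: hits the target, within the guardrail
print(campaign_passes(18.0, 0.14))  # False: hits the target by driving returns up
```

The second campaign beats the first on the primary metric and still fails, which is exactly the behaviour a guardrail exists to catch.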

Build in a regular cadence for reviewing whether the framework itself is still fit for purpose. Business models change. Customer journeys change. The measurement framework that was appropriate twelve months ago may be systematically misleading you today. This is not a hypothetical. I’ve seen it happen in businesses that were otherwise well-run, where the measurement infrastructure had simply not kept pace with how the business had evolved.

The Role of Paid Social in a Multi-Channel Measurement Framework

Paid social sits in an awkward position in most measurement frameworks. It operates higher in the funnel than paid search, which means its contribution to revenue is harder to trace through standard attribution models. At the same time, the platforms themselves are increasingly incentivised to show you numbers that justify continued spend.

Semrush’s overview of paid social advertising covers the channel mechanics well. The measurement challenge is distinct from the channel mechanics. Paid social tends to create demand rather than capture it, which means its ROI often shows up in branded search volume, direct traffic, and organic conversion rates rather than in last-click attribution reports.

If you’re running paid social alongside paid search, the most honest way to assess its contribution is through media mix modelling or geo-based incrementality tests. Both are more resource-intensive than reading the Meta Ads Manager dashboard. Both are also more likely to give you an accurate picture of what the channel is actually contributing.

The broader question of how paid channels interact is one I’d encourage any serious practitioner to spend time on. Unbounce’s analysis of how paid search affects organic performance is a useful entry point into thinking about channel interactions, even if the specific dynamics have evolved since it was written. The underlying point, that paid and organic channels influence each other in ways that single-channel measurement misses, remains valid.

Across 30+ industries and hundreds of millions in managed ad spend, the pattern I’ve seen repeat most consistently is this: the businesses with the most sophisticated measurement frameworks are rarely the ones with the most sophisticated technology. They’re the ones where marketing and finance share a common language about what the numbers mean and what decisions they’re being used to make.

There’s more on how different paid channels fit together, and how to think about budget allocation across them, in the Paid Advertising section of The Marketing Juice. It’s worth reading alongside this if you’re building or reviewing a multi-channel measurement approach.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a good ROI for paid advertising?
There is no universal benchmark because ROI depends entirely on your margins, business model, and customer lifetime value. A 200% ROI might be excellent for a high-margin software product and loss-making for a low-margin retailer. The more useful question is whether your paid advertising is generating returns above your cost of capital, after accounting for all variable costs, not just ad spend.
What is the difference between ROAS and ROI in paid advertising?
ROAS measures revenue relative to ad spend only. ROI measures profit relative to total investment, including ad spend, agency fees, creative costs, and any other variable costs associated with the campaign. A campaign can show a strong ROAS while delivering a negative ROI once all costs are factored in. Using ROAS as a proxy for ROI is one of the most common measurement errors in paid advertising.
How does attribution modelling affect paid advertising ROI measurement?
Attribution models determine which touchpoints receive credit for a conversion, and different models produce materially different results from the same underlying data. Last-click attribution consistently overstates the value of bottom-funnel channels like branded search and retargeting, while undervaluing awareness and consideration channels. The model you use shapes every optimisation decision that follows, so choosing it deliberately rather than accepting the platform default matters.
What is incrementality testing and why does it matter for paid advertising?
Incrementality testing measures the revenue that would not have occurred without a specific campaign, by comparing conversion rates between an exposed group and a statistically comparable holdout group. It matters because standard attribution models count all conversions that followed an ad exposure, including conversions that would have happened anyway. Incrementality testing gives you the true causal contribution of your advertising rather than a correlation-based estimate.
How should I measure ROI for paid social advertising?
Paid social typically operates higher in the purchase funnel than paid search, which means its contribution to revenue rarely shows up clearly in last-click attribution reports. More reliable approaches include geo-based incrementality tests, media mix modelling, and tracking downstream effects on branded search volume and direct traffic. Reading the platform’s own reporting as the primary measure of paid social ROI will almost always overstate its direct contribution and obscure its true role in the customer experience.
