Paid Advertising ROI: Stop Measuring the Wrong Things

Measuring ROI in paid advertising sounds straightforward until you actually try to do it. The core calculation is simple enough: what did you spend, what did you get back. The problem is that most businesses measure the easy things rather than the right things, and the gap between those two creates a false picture of what is and is not working.

Getting paid advertising ROI right means agreeing on what counts as a return, building attribution that reflects how customers actually behave, and being honest about what your measurement tools can and cannot tell you. Most teams skip at least one of those steps.

Key Takeaways

  • Last-click attribution is still the default in most accounts, and it systematically overvalues bottom-funnel channels while starving upper-funnel spend that was doing the real work.
  • Platform-reported ROAS and your actual business margin are two different numbers. Conflating them is one of the most expensive mistakes in paid media.
  • Incrementality testing, not attribution modelling, is the most honest way to understand whether your paid spend is driving growth or just taking credit for it.
  • A high CTR and a low CPA can still produce a campaign that loses money if the lifetime value of the customers acquired does not justify the acquisition cost.
  • The goal of measurement is honest approximation, not false precision. Chasing a perfect number often produces a confidently wrong one.

Why Most Paid Advertising ROI Calculations Are Wrong Before You Start

The first mistake happens before a single campaign goes live. Teams define ROI using the metrics that are easiest to pull from a dashboard rather than the metrics that actually reflect business performance. Cost per click, click-through rate, cost per acquisition: these are inputs to a judgment, not the judgment itself.

Early in my career running paid search at scale, I watched a client celebrate a campaign that had delivered a cost per acquisition well below target. The number looked excellent on paper. What the dashboard did not show was that the customers acquired through that campaign had a 60-day return rate three times higher than the baseline. The CPA was low because the quality was low. We had optimised ourselves into a loss.

That experience shaped how I think about measurement. A metric is only useful if it connects to something that matters commercially. CPA matters if you know the margin on what you are selling and the expected lifetime value of the customer. Without those two numbers, CPA is just an input with no context.

Before you build any measurement framework, define what a return actually means for your business. Revenue is not the same as profit. Profit is not the same as long-term customer value. The further upstream your definition of return sits from actual business performance, the more likely you are to optimise toward the wrong outcome.

If you are working through the fundamentals of how paid channels fit into a broader acquisition strategy, the paid advertising hub covers the channel landscape in more depth.

The Attribution Problem Nobody Talks About Honestly

Attribution is where paid advertising measurement gets genuinely difficult, and where the industry has done a poor job of being straight with clients and marketers.

Last-click attribution, which assigns 100% of the credit for a conversion to the final touchpoint before purchase, remains the default in a significant number of accounts. It is also a fiction. Customers rarely convert after a single interaction with a single channel. They search, they see a display ad, they come back through social, they search again, they convert. Last-click says the final search term did all the work. Every other touchpoint gets nothing.

The practical consequence is that channels doing awareness and consideration work look terrible in last-click attribution, while branded search and retargeting look extraordinary. Budgets shift accordingly. Teams cut the channels that were building demand and double down on the channels that were harvesting it. Growth stalls, and nobody can work out why.

When I was growing the performance division at iProspect, we spent a lot of time educating clients on this exact dynamic. The clients who understood it gave us the latitude to invest in upper-funnel activity. The ones who did not kept cutting anything that did not show a direct last-click return. The difference in growth trajectories between those two groups was not subtle.

Multi-touch attribution models are better than last-click, but they are still models. They make assumptions about how credit should be distributed, and those assumptions are rarely validated against actual business outcomes. Data-driven attribution, which Google has pushed heavily in recent years, uses machine learning to assign credit based on observed conversion paths. It is more sophisticated, but it is still working within the data that the platform can see, which is an increasingly incomplete picture as third-party cookies disappear and cross-device behaviour becomes harder to track.

The honest position is that attribution models give you a perspective on reality, not reality itself. They are useful for directional decision-making. They are dangerous when treated as ground truth.

Incrementality Testing: The Measurement Approach That Actually Tells You Something

If attribution is a model, incrementality testing is an experiment. And experiments are more trustworthy than models.

An incrementality test asks a simple question: what would have happened if we had not run this campaign? To answer it, you hold back a control group from seeing your ads and compare their behaviour against the group that was exposed. The difference in conversion rate between the two groups is your incremental lift. That is the return your campaign actually generated, as opposed to the return it took credit for.
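The lift calculation described above is simple enough to sketch in a few lines. The conversion counts here are hypothetical, and a real test would also need a significance check before acting on the result:

```python
def incremental_lift(exposed_conversions, exposed_size,
                     control_conversions, control_size):
    """Absolute and relative lift of the exposed group over the holdout."""
    exposed_rate = exposed_conversions / exposed_size
    control_rate = control_conversions / control_size
    absolute_lift = exposed_rate - control_rate
    relative_lift = absolute_lift / control_rate if control_rate else float("inf")
    # Conversions the campaign actually caused, not just took credit for:
    incremental_conversions = absolute_lift * exposed_size
    return absolute_lift, relative_lift, incremental_conversions

# Hypothetical test: 2.0% conversion among exposed, 1.5% in the holdout
abs_lift, rel_lift, incremental = incremental_lift(2_000, 100_000, 750, 50_000)
```

In this made-up example the campaign caused roughly 500 conversions, even though attribution might have claimed all 2,000. The gap between those two figures is the whole point of running the test.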

This matters because a meaningful portion of the conversions attributed to paid advertising would have happened anyway. Customers who were already intent on buying your product will search for your brand, click your branded search ad, and convert. Your attribution model credits the campaign. Your incrementality test reveals that the campaign did not change their behaviour at all.

Google’s campaign experiments tool has made this kind of testing more accessible for search campaigns. Search Engine Land covered the original launch of the experiments framework, and the functionality has developed considerably since then. The principle, running a controlled test against a holdout group, remains the most reliable way to measure true campaign impact.

Incrementality testing is not without limitations. It requires sufficient volume to produce statistically meaningful results. It can be difficult to implement cleanly in channels where audience targeting is less precise. And it tells you about the past, not the future. But as a check on what your attribution model is telling you, it is invaluable. If your attribution model says a campaign drove 500 conversions and your incrementality test says it drove 150, that is a significant finding that should change how you allocate budget.

Platform ROAS vs. Business Margin: A Distinction That Costs Companies Real Money

Platform-reported return on ad spend is one of the most widely cited and most frequently misunderstood metrics in paid advertising.

ROAS as reported by Google, Meta, or any other platform is calculated as revenue divided by ad spend. A ROAS of 4 means you generated four pounds or dollars in revenue for every one you spent on ads. That sounds like a useful number. The problem is that revenue is not the same as profit, and ad spend is not the same as total marketing cost.

Consider a simple example. You spend £10,000 on paid search and generate £40,000 in revenue. Your platform-reported ROAS is 4x. But the products you sold carry a 30% gross margin, so your gross profit is £12,000. Your agency management fee is £2,000 per month. Your actual return on total marketing investment is considerably thinner than the headline ROAS suggested, and if your product return rate is 20%, you may be looking at a campaign that barely breaks even.
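Working the example above through numerically makes the gap concrete. The figures come from the paragraph; treating returns as a flat deduction from revenue is one reasonable simplification:

```python
def platform_roas(revenue, ad_spend):
    """ROAS as the ad platform reports it: revenue over spend, nothing else."""
    return revenue / ad_spend

def profit_after_costs(revenue, ad_spend, gross_margin,
                       fees=0.0, return_rate=0.0):
    """Net out product returns, apply gross margin, subtract ad spend and fees."""
    net_revenue = revenue * (1 - return_rate)
    gross_profit = net_revenue * gross_margin
    return gross_profit - ad_spend - fees

roas = platform_roas(40_000, 10_000)  # 4.0 -- looks healthy on the dashboard
before_returns = profit_after_costs(40_000, 10_000, 0.30, fees=2_000)
with_returns = profit_after_costs(40_000, 10_000, 0.30, fees=2_000,
                                  return_rate=0.20)
```

A headline 4x ROAS resolves to exact break-even once margin and fees are applied, and to a loss once a 20% return rate bites. The platform number and the P&L number are both correct; they are just measuring different things.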

I have sat in client meetings where the marketing team was celebrating a ROAS of 6x while the finance director was looking at the same period’s P&L with a very different expression. The two sets of numbers were not contradicting each other. They were measuring different things. The marketing team had not done the translation work to connect platform metrics to business outcomes.

The metric you want is a marketing efficiency ratio, or more precisely, profit on ad spend (POAS). This takes your actual margin into account. It requires more work to calculate because it needs data from your finance system, not just your ad platform. But it is the number that tells you whether your paid advertising is creating business value.

For e-commerce businesses in particular, building a more rigorous approach to Google Ads ROI means connecting ad platform data to actual margin data, not just revenue data. That integration is rarely automatic and usually requires some deliberate work to set up correctly.

Customer Lifetime Value and Why Acquisition Cost Alone Misses the Point

Cost per acquisition is a useful efficiency metric. It becomes a misleading one the moment you use it as a standalone measure of campaign success.

A £20 CPA looks better than a £50 CPA in a spreadsheet. But if the customers acquired at £20 buy once and never return, while the customers acquired at £50 go on to purchase four more times over the following year, the £50 CPA campaign is the better business decision by a significant margin.
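The comparison above can be made concrete with lifetime contribution rather than acquisition cost alone. The average order value and margin below are hypothetical, chosen only to illustrate the shape of the calculation:

```python
def customer_profit(cpa, purchases, avg_order_value, gross_margin):
    """Lifetime contribution margin of a customer, minus acquisition cost."""
    return purchases * avg_order_value * gross_margin - cpa

# Hypothetical economics: £60 average order, 40% gross margin
one_and_done = customer_profit(cpa=20, purchases=1,
                               avg_order_value=60, gross_margin=0.40)
repeat_buyer = customer_profit(cpa=50, purchases=5,
                               avg_order_value=60, gross_margin=0.40)
```

Under these assumptions the £20 customer is worth £4 of lifetime profit and the £50 customer £70. The spreadsheet-friendly CPA comparison points at exactly the wrong campaign.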

When I launched a paid search campaign for a music festival at lastminute.com, we generated six figures of revenue within roughly 24 hours from a relatively lean campaign setup. The CPA looked excellent. But the real commercial question was not how cheaply we had acquired those customers. It was whether those customers would come back for the next event, the next trip, the next booking. The acquisition cost was just the entry price. The value of the relationship was what determined whether the campaign had been commercially sound.

Building LTV into your paid advertising ROI framework requires data that most marketing teams do not own directly. You need purchase history, repeat rate, average order value over time, and ideally churn data. That typically means working with the finance or data team rather than pulling numbers from an ad platform.

The practical starting point is to segment your customers by acquisition channel and compare their downstream behaviour. Do customers acquired through paid search have a different 12-month purchase pattern than customers acquired through paid social? If yes, that difference should be reflected in the CPA targets you set for each channel. A channel that acquires higher-value customers can justify a higher CPA. A channel that acquires one-time buyers needs a much lower one.
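Once you have segmented 12-month contribution by channel, deriving channel-specific CPA targets can be as simple as dividing by a target LTV-to-CAC ratio. Both the contribution figures and the ratio below are assumptions, not recommendations:

```python
# Hypothetical 12-month contribution margin per customer, by acquisition channel
contribution_12m = {"paid_search": 90.0, "paid_social": 45.0}

TARGET_RATIO = 3.0  # assumed LTV:CAC target; yours depends on payback needs

cpa_targets = {channel: round(value / TARGET_RATIO, 2)
               for channel, value in contribution_12m.items()}
# The channel acquiring higher-value customers earns a higher CPA ceiling
```

With these inputs, paid search can justify a £30 CPA while paid social tops out at £15. A single blended CPA target would misprice both channels.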

Paid social reporting tools that connect acquisition data to longer-term customer behaviour, such as Sprout Social’s paid social analytics, can help surface these patterns without requiring a full data warehouse build. The point is not which tool you use. It is that you are asking the question at all.

Setting the Right Targets Before You Spend a Pound

One of the most common measurement failures I see is campaigns launched without a clearly defined target. Not a vague aspiration to “drive growth” or “increase sales”, but a specific, commercially grounded number that tells you whether the campaign worked.

That target should be derived from your unit economics, not from industry benchmarks. Industry average CPAs are interesting context. They are not your business. Your target CPA should be set based on your margin, your LTV, and the payback period your business can sustain. A venture-backed startup with a long payback runway can afford a very different CPA than a bootstrapped business that needs to break even within 90 days.
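The payback-period point lends itself to a simple ceiling calculation: spend no more to acquire a customer than the contribution margin they return within the window your business can wait. The monthly contribution figure here is hypothetical:

```python
def max_cpa_for_payback(monthly_contribution, payback_months):
    """Upper bound on CPA given how long the business can wait for payback."""
    return monthly_contribution * payback_months

# Hypothetical: £12/month contribution margin per acquired customer
funded_startup = max_cpa_for_payback(12.0, 18)  # long runway
bootstrapped = max_cpa_for_payback(12.0, 3)     # must break even in ~90 days
```

The same customer economics produce a £216 ceiling for the funded business and £36 for the bootstrapped one, which is why copying someone else's CPA target is rarely useful.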

The target-setting conversation is also where channel selection becomes clearer. Paid search captures existing demand efficiently but cannot create demand that does not exist. If your product is new or your category is not yet searched for, paid search will deliver low volume regardless of how well the campaign is structured. Paid social can build awareness and create demand, but the path to conversion is longer and the attribution is messier. Understanding what each channel can and cannot do informs both your targets and your measurement approach.

Set your targets before the campaign launches. Document them. Then measure against them with the same rigour you would apply to any other business investment. If the campaign does not hit the target, the question is not how to make the numbers look better. It is whether the target was wrong, the channel was wrong, or the execution was wrong.

The Practical Measurement Stack for Paid Advertising

You do not need a sophisticated data infrastructure to measure paid advertising ROI well. You need a small number of connected data points and the discipline to look at them together rather than in isolation.

At minimum, you need: ad platform data (spend, impressions, clicks, conversions), website analytics data (sessions, conversion rate, revenue), and business data (margin, return rate, repeat purchase rate). The connection between these three data sources is where most of the measurement value lives, and it is where most teams have gaps.

For paid social specifically, the gap between platform-reported conversions and actual business outcomes has widened as iOS privacy changes have limited the data available to platforms. Meta’s reported conversions, in particular, should be treated as directional rather than precise. Cross-referencing platform data against your own analytics and against actual revenue in your finance system is not optional if you want to make sound budget decisions. Semrush’s overview of paid social measurement covers some of the practical approaches for reconciling these data sources.
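One practical way to do that cross-referencing is to compute a deflation factor from a period where you can reconcile platform-reported conversions against finance-verified ones, then apply it to platform metrics going forward. This is a directional heuristic, not a correction: it assumes over-reporting is roughly proportional across campaigns, which may not hold. The numbers are hypothetical:

```python
def deflation_factor(platform_conversions, finance_verified_conversions):
    """How much to discount platform figures, from a reconciled period."""
    return finance_verified_conversions / platform_conversions

def adjusted_roas(reported_roas, factor):
    # Directional only: assumes proportional over-reporting across campaigns
    return reported_roas * factor

factor = deflation_factor(1_000, 820)  # hypothetical reconciled month
roas_adjusted = adjusted_roas(4.0, factor)
```

A reported 4x ROAS deflates to roughly 3.3x under these figures. That adjusted number is still an approximation, but it is anchored to your finance system rather than to the platform's attribution model.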

The other discipline worth building is a regular cadence of pausing underperforming elements and reallocating budget toward what is working. Granular control at the keyword and ad level has been available in Google Ads for years. The question is whether you are using it systematically or just letting campaigns run because nobody has time to review them. Budget optimisation is not a one-time setup task. It is an ongoing operational discipline.

I spent years judging the Effie Awards, which evaluate marketing effectiveness rather than creative quality. The campaigns that stood out were not the ones with the most sophisticated measurement frameworks. They were the ones where the team had been clear about what they were trying to achieve, honest about what the data showed, and willing to change course when the evidence pointed in a different direction. That combination is rarer than it should be.

There is more on how paid channels fit into a full acquisition strategy across the paid advertising section of The Marketing Juice, including how to think about channel mix and budget allocation across search, social, and programmatic.

Honest Approximation Over False Precision

The temptation in paid advertising measurement is to chase a single, clean number that tells you definitively whether your campaigns are working. That number does not exist.

What exists is a collection of imperfect signals that, read together and interpreted with judgment, give you a reasonable basis for decision-making. Attribution models are imperfect. Platform data is incomplete. Incrementality tests are expensive and time-consuming. LTV projections are estimates. None of that means measurement is pointless. It means you should hold your conclusions with appropriate confidence rather than false certainty.

The teams that get paid advertising ROI right are not the ones with the most sophisticated tools. They are the ones that have agreed on what they are measuring, connected their marketing metrics to their business metrics, and built a culture where the data is interrogated rather than celebrated. That is harder than buying a better attribution platform. It is also considerably more valuable.

Paid advertising, at its best, is one of the most accountable forms of marketing investment available. The accountability only materialises if you measure it honestly. And measuring it honestly starts with being clear about what you actually want to know, rather than defaulting to what is easiest to report.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a good ROI for paid advertising?
There is no universal benchmark because ROI depends entirely on your margin, customer lifetime value, and payback period requirements. A ROAS of 3x might be highly profitable for a high-margin product and loss-making for a low-margin one. Set your target based on your own unit economics rather than industry averages, and make sure you are measuring profit on ad spend rather than just revenue against spend.
Why does my platform ROAS not match my actual business results?
Platform ROAS measures revenue against ad spend only. It does not account for product margin, returns, agency fees, or any other costs. It also relies on attribution models that may be crediting your campaigns with conversions that would have happened anyway. Cross-referencing platform data against your finance system and running incrementality tests will give you a more accurate picture of actual business impact.
What is the difference between attribution and incrementality testing?
Attribution modelling distributes credit for conversions across the touchpoints in a customer journey, based on a set of rules or a machine learning model. Incrementality testing holds back a control group from seeing your ads and measures the difference in conversion rate between the exposed and unexposed groups. Attribution tells you which channels were present when conversions happened. Incrementality testing tells you which channels actually caused them.
How should customer lifetime value affect my paid advertising targets?
If you know that customers acquired through a particular channel have a higher average LTV than those from other channels, you can justify a higher CPA target for that channel. Setting a single CPA target across all channels ignores the downstream value difference between customer segments. Segment your customers by acquisition source and compare 12-month purchase behaviour to understand which channels are acquiring your most valuable customers, then set targets accordingly.
How have iOS privacy changes affected paid advertising measurement?
Apple’s App Tracking Transparency framework, introduced with iOS 14.5, significantly reduced the data available to ad platforms for tracking conversions on mobile devices. Meta in particular saw substantial drops in reported conversion data. The practical effect is that platform-reported conversions for paid social campaigns are now less complete than they were before, making it more important to cross-reference platform data against your own analytics and revenue data rather than relying on platform reporting alone.
