ROAS Advertising Is Measuring the Wrong Thing
ROAS advertising is the practice of optimising paid media campaigns around return on ad spend, typically calculated as revenue generated divided by ad spend. It sounds like commercial rigour. In practice, it often isn’t.
The metric isn’t wrong. The problem is how it gets used: as a proxy for business health, as a ceiling on ambition, and as a reason to avoid the harder, more important question of whether your advertising is actually growing your business or just processing demand that already existed.
Key Takeaways
- ROAS measures efficiency within existing demand, not whether your advertising is creating new demand or growing your market.
- Optimising purely for ROAS can actively suppress growth by starving upper-funnel activity that builds future demand.
- A high ROAS on branded search often reflects sales that would have happened anyway, not incremental value created by advertising.
- The most commercially useful version of ROAS is one calibrated against incrementality, not just attributed revenue.
- Brands that treat ROAS as a business strategy rather than a campaign metric tend to shrink their addressable market over time.
In This Article
- Why ROAS Became the Default Metric
- What ROAS Actually Measures (And What It Doesn’t)
- The Lower-Funnel Trap
- How to Set a ROAS Target That Actually Means Something
- Incrementality Testing: The Discipline ROAS Needs
- Platform ROAS vs. Business ROAS
- When a Lower ROAS Is the Right Answer
- Building a ROAS Framework That Serves the Business
Why ROAS Became the Default Metric
Performance marketing gave CFOs something they’d always wanted: a number that looked like proof. Revenue in, spend out, ratio produced. Clean. Auditable. Easy to put in a board deck.
I spent years working inside that system, managing large paid media accounts across retail, finance, and travel, and watching ROAS targets get set in January and then defended religiously for the rest of the year regardless of what the market was doing. The target became the strategy. That’s where things go wrong.
The appeal is understandable. Before digital performance marketing, advertising accountability was genuinely difficult. You ran a TV campaign and hoped the brand tracking moved. Now you can see cost-per-click, conversion rate, and attributed revenue in real time. That transparency created a gravitational pull toward lower-funnel activity, because lower-funnel activity produces the numbers that justify the budget.
The problem is that the numbers are easier to produce than they are to interpret. Attribution models assign credit. They don’t prove causation. And ROAS, built on top of those attribution models, inherits all of their limitations while presenting itself with the confidence of a financial statement.
If you’re thinking about where ROAS fits within a broader commercial strategy, the Go-To-Market and Growth Strategy hub covers the frameworks that give metrics like this their proper context.
What ROAS Actually Measures (And What It Doesn’t)
ROAS measures the revenue attributed to your ads divided by what you spent to run them. A 4x ROAS means every pound spent returned four pounds in attributed revenue. Simple enough.
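The arithmetic is simple enough to write down directly. A minimal sketch in Python, with hypothetical spend and revenue figures:

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: attributed revenue per unit of spend."""
    if ad_spend <= 0:
        raise ValueError("ad spend must be positive")
    return attributed_revenue / ad_spend

# Hypothetical figures: £25,000 spent, £100,000 in attributed revenue.
print(roas(100_000, 25_000))  # 4.0, i.e. a 4x ROAS
```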
What it doesn’t measure is whether those sales would have happened without the ad. That distinction matters enormously, and most ROAS reporting papers over it entirely.
Think about branded search. Someone already knows your brand. They type your name into Google. Your ad appears, they click, they buy. The platform records a conversion and attributes it to the campaign. Your ROAS looks excellent. But the person was already coming to your site. The ad didn’t create the sale. It just got in the way of the organic click and charged you for it.
I’ve sat in rooms where a brand’s paid search ROAS was being celebrated while their organic traffic was quietly declining. The two things were connected. The paid activity was cannibalising organic, and the ROAS figure was obscuring that. When we ran incrementality tests, the actual incremental return was a fraction of the reported ROAS. The headline number wasn’t wrong, exactly. It was just measuring something different from what everyone assumed it was measuring.
This is the core issue with using ROAS as a primary success metric. It tells you how efficiently your ads are processing demand. It tells you almost nothing about whether your advertising is generating demand.
The Lower-Funnel Trap
Earlier in my career, I put too much weight on lower-funnel performance. The numbers were compelling, the accountability was clear, and clients loved the reporting. It took a few years of watching businesses plateau before I understood what was happening.
Lower-funnel advertising is efficient because it targets people who are already close to buying. But that pool of people doesn’t grow on its own. It grows because upper-funnel activity (brand campaigns, content, awareness media) puts new people into consideration. If you starve that activity to protect your ROAS, you’re slowly shrinking the audience that your lower-funnel campaigns can reach.
It’s a bit like a clothes shop that only talks to people already standing at the till. The conversion rate looks incredible. But footfall is falling, and nobody’s asking why.
The ROAS optimisation loop accelerates this problem. Algorithms trained to maximise ROAS will find the easiest conversions first. Branded terms. Retargeting audiences. High-intent queries from people already in the funnel. Over time, spend concentrates in those areas because they produce the best numbers. The brand invests less in reaching new audiences. The addressable market narrows. Growth stalls. And because ROAS looks fine, nobody raises the alarm until it’s too late.
Vidyard’s analysis of why go-to-market feels harder than it used to touches on this dynamic. The channels that used to generate efficient demand are more competitive and more saturated. Relying on lower-funnel efficiency to carry growth is a strategy that worked better ten years ago than it does now.
How to Set a ROAS Target That Actually Means Something
If you’re going to use ROAS as a campaign metric, at least calibrate it properly. Most ROAS targets are set by working backwards from a desired margin or by copying what a competitor appears to be doing. Neither approach is particularly rigorous.
A defensible ROAS target starts with your unit economics. What is the gross margin on the product being advertised? What is the customer lifetime value, not just the first transaction? What percentage of buyers are genuinely new customers versus existing customers buying again? Once you have those numbers, you can set a ROAS floor that reflects actual business value rather than an arbitrary ratio.
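One way to turn those three inputs into a floor: spend breaks even when attributed revenue, weighted by what a pound of that revenue is actually worth to the business, covers the spend. This is a sketch under stated assumptions, not a standard formula; the function name and the way new and existing customers are blended are illustrative choices:

```python
def roas_floor(gross_margin: float,
               ltv_multiple: float = 1.0,
               new_customer_share: float = 1.0) -> float:
    """Minimum ROAS at which attributed spend breaks even on contribution.

    gross_margin:       contribution margin on the advertised product (0-1)
    ltv_multiple:       expected lifetime revenue / first-order revenue
    new_customer_share: fraction of attributed buyers who are genuinely new

    Assumption: new customers are valued at margin * ltv_multiple;
    existing customers at first-order margin only.
    """
    value_per_pound = gross_margin * (
        new_customer_share * ltv_multiple + (1 - new_customer_share))
    return 1 / value_per_pound

# 50% margin, no lifetime uplift: you need at least 2x just to break even.
print(roas_floor(0.5))                      # 2.0
# 40% margin, 2.5x lifetime multiple, 60% genuinely new customers:
print(round(roas_floor(0.4, 2.5, 0.6), 2))  # 1.32
```

The second case is the point: a headline target of, say, 4x can be far above the real break-even once lifetime value is counted, which is exactly when a rigid ROAS target starts suppressing growth.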
The next step is separating your ROAS reporting by campaign type. Branded search and non-branded search should never be reported together. Retargeting and prospecting should never be reported together. Mixing them produces a blended figure that flatters the efficient campaigns and hides the performance of the ones doing the harder work of reaching new audiences.
When I was running agency teams across multiple verticals, one of the first things I’d do on a new account was pull the ROAS data apart by audience type. Almost every time, the blended number was being propped up by branded and retargeting activity. The prospecting campaigns, the ones actually doing the job of growing the business, were often running at a loss or close to it. That’s not always wrong. But it needs to be a conscious decision, not something hidden inside a blended metric.
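The pattern is easy to demonstrate with hypothetical numbers. Segment-level ROAS can tell the opposite story from the blended figure:

```python
# Hypothetical account: spend and attributed revenue by campaign type.
campaigns = {
    "branded_search": {"spend": 10_000, "revenue": 120_000},
    "retargeting":    {"spend": 15_000, "revenue": 90_000},
    "prospecting":    {"spend": 50_000, "revenue": 45_000},
}

for name, c in campaigns.items():
    print(f"{name}: {c['revenue'] / c['spend']:.1f}x")
# branded_search: 12.0x, retargeting: 6.0x, prospecting: 0.9x

blended = (sum(c["revenue"] for c in campaigns.values())
           / sum(c["spend"] for c in campaigns.values()))
print(f"blended: {blended:.1f}x")  # 3.4x looks healthy; prospecting runs at a loss
```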
BCG’s work on commercial transformation in go-to-market strategy makes a similar point about the difference between optimising existing revenue streams and building new ones. The metrics you use shape the decisions you make. If your metrics only reward efficiency, your decisions will only optimise for efficiency.
Incrementality Testing: The Discipline ROAS Needs
The most honest version of ROAS is one built on incrementality data. Incrementality testing asks a different question from standard attribution: not “which ad was last seen before the purchase?” but “would this purchase have happened without the ad?”
The methodology is straightforward in principle. You create a holdout group, a segment of your target audience that sees no advertising. You run your campaigns to the exposed group. After a set period, you compare conversion rates between the two groups. The difference represents the incremental lift your advertising actually generated.
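The comparison described above reduces to a simple lift calculation. A sketch with hypothetical test numbers (audience sizes, order value, and spend are all illustrative):

```python
def incremental_lift(exposed_conv: int, exposed_size: int,
                     holdout_conv: int, holdout_size: int) -> float:
    """Absolute lift in conversion rate attributable to the advertising."""
    return exposed_conv / exposed_size - holdout_conv / holdout_size

# Hypothetical test: 100,000 exposed users convert at 2.4%;
# a 10,000-person holdout, shown no ads, converts at 2.0%.
lift = incremental_lift(2_400, 100_000, 200, 10_000)
incremental_conversions = lift * 100_000   # ads caused roughly 400 of 2,400 sales

# With an £80 average order and £12,000 of spend:
attributed_roas = 2_400 * 80 / 12_000                    # 16.0x on the dashboard
incremental_roas = incremental_conversions * 80 / 12_000  # roughly 2.7x in reality
```

The gap between those last two numbers is the article’s argument in miniature: the dashboard ROAS includes every conversion the attribution model touched, while the incremental ROAS counts only the sales the advertising actually caused.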
In practice, incrementality testing requires discipline that most marketing teams resist. It means deliberately not advertising to some potential customers. It means accepting a period of reduced attributed revenue while the test runs. It means confronting the possibility that some of your best-performing campaigns are performing well on paper because they’re targeting people who were going to convert anyway.
I’ve seen the results of incrementality tests cause genuine discomfort in client organisations. Campaigns that looked like stars on ROAS reporting turned out to be largely non-incremental. Campaigns that looked average were driving most of the actual new business. The reporting had been telling a story that wasn’t quite true, and nobody had questioned it because the numbers were good.
That’s not a technology problem. It’s a culture problem. When ROAS is the metric that gets people promoted and budgets approved, there’s no incentive to test whether the ROAS is real.
Platform ROAS vs. Business ROAS
There’s a version of ROAS that exists inside your ad platform, and there’s a version that exists in your business. They are not the same thing, and treating them as equivalent is one of the most common and costly mistakes in performance marketing.
Platform ROAS is calculated using the platform’s attribution model. Google Ads, Meta, and every other major platform have a financial incentive to attribute as much revenue to themselves as possible. That’s not a conspiracy. It’s just how the business works. Last-click, data-driven, and view-through attribution models all have legitimate uses, but they also all tend to overstate the contribution of the platform running them.
Business ROAS, by contrast, is calculated using your own revenue data matched against your own spend data. It accounts for the fact that a customer who saw a Facebook ad and a Google ad and an email before converting shouldn’t be counted three times. It reflects the actual cost of acquiring that revenue, not the cost as seen from inside one platform’s reporting dashboard.
The gap between platform ROAS and business ROAS varies by category and campaign type, but in my experience it is rarely small. On accounts running across multiple channels, the blended platform ROAS can be two or three times higher than the business ROAS when you reconcile properly against actual revenue. That gap represents real money being misallocated based on numbers that feel precise but aren’t.
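A hypothetical reconciliation makes the gap concrete. Each platform’s dashboard reports its own attributed revenue; the business counts each order once. All figures below are illustrative:

```python
# Hypothetical month. Platform dashboards overlap: a buyer who saw both a
# Google ad and a Meta ad is claimed by both, so attributed revenue double-counts.
platform_reports = {
    "google_ads": {"spend": 40_000, "attributed_revenue": 160_000},
    "meta":       {"spend": 30_000, "attributed_revenue": 120_000},
}
actual_paid_revenue = 110_000  # from finance/CRM, each order counted once

total_spend = sum(p["spend"] for p in platform_reports.values())
platform_roas = (sum(p["attributed_revenue"] for p in platform_reports.values())
                 / total_spend)
business_roas = actual_paid_revenue / total_spend

print(f"platform ROAS: {platform_roas:.1f}x")  # 4.0x
print(f"business ROAS: {business_roas:.1f}x")  # 1.6x: roughly a 2.5x gap
```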
Forrester’s research on intelligent growth models is relevant here. Sustainable growth requires honest measurement. Flattering measurement produces good-looking dashboards and poor decisions.
When a Lower ROAS Is the Right Answer
There are circumstances where accepting a lower ROAS is not a failure of commercial discipline. It is commercial discipline.
If you’re entering a new market, targeting a new customer segment, or launching a product with no existing demand, you should expect a lower ROAS than your established campaigns. The activity is building something. Judging it against the same ROAS target as your branded retargeting is like judging a seed by whether it’s a tree yet.
Similarly, if customer lifetime value is high, a lower ROAS on acquisition can be entirely rational. A business where customers spend significantly more over time than they do on their first order should be willing to acquire customers at a lower initial return. The ROAS on transaction one is not the ROAS on the relationship.
The Effie Awards, which I’ve had the chance to judge, consistently reward campaigns that took a longer view. The work that wins effectiveness awards rarely optimised for short-term ROAS. It built brand equity, reached new audiences, and created the conditions for sustained commercial performance. That work is harder to justify in a quarterly budget review. It is also, over time, more valuable.
The broader frameworks for thinking about growth investment, including when to accept lower short-term returns in exchange for longer-term market position, are covered in the Go-To-Market and Growth Strategy hub. The ROAS conversation doesn’t exist in isolation from those strategic decisions.
Building a ROAS Framework That Serves the Business
A ROAS framework that actually serves the business has a few components that most current approaches lack.
First, it distinguishes between campaign types and sets different expectations for each. Branded campaigns and retargeting should be held to a high ROAS standard because they’re operating in warm territory. Prospecting campaigns should be evaluated differently, with more weight given to new customer acquisition rate and long-term value.
Second, it connects ROAS to actual business outcomes rather than platform-attributed revenue. That means reconciling ad platform data against your own CRM and finance data regularly, not just trusting the dashboard.
Third, it incorporates some form of incrementality measurement, even if it’s not a continuous holdout test. Periodic geo-based tests or time-based experiments can give you a directional read on how much of your attributed ROAS is incremental versus captured demand.
Fourth, it uses ROAS as one input into a broader set of metrics rather than the single arbiter of campaign value. Market share, new customer acquisition rate, brand consideration, and category reach all matter. A business that grows its ROAS while losing market share is not winning.
Tools that help you track and model these relationships, from attribution modelling to growth analysis, are covered in resources like Semrush’s breakdown of growth tools and Crazy Egg’s analysis of growth frameworks. The tooling has improved significantly. The discipline required to use it honestly hasn’t always kept pace.
BCG’s work on brand strategy and go-to-market alignment makes a point that’s directly relevant: commercial performance requires alignment between brand investment and performance investment, not a zero-sum competition between the two mediated by ROAS targets.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
