Digital Advertising Benchmarks: What Good Looks Like

Digital advertising benchmarks are industry averages that tell you whether your campaign performance is above, below, or in line with what comparable advertisers typically achieve. They cover metrics like click-through rate, cost per click, conversion rate, and return on ad spend, broken out by channel and, in better data sets, by industry vertical.

The problem is that most marketers use them wrong. They treat benchmarks as targets rather than context, and that single misunderstanding costs them more than any poorly optimised campaign ever could.

Key Takeaways

  • Benchmarks are reference points, not targets. Optimising to hit an industry average is optimising to be ordinary.
  • A “good” CTR in one industry can be catastrophically bad in another. Vertical context is everything.
  • Your own historical data is almost always more useful than any published benchmark report.
  • Return on ad spend without margin context is a vanity metric. Revenue is not profit.
  • Benchmarks are most valuable at the planning stage, when you have no prior data to work from.

Why Benchmarks Exist and What They Actually Measure

Benchmark reports get published because there is genuine demand for them. Every marketing team at some point needs to answer the question: is this performance good? When you are starting a new channel, entering a new market, or presenting results to a board that has no frame of reference, external benchmarks fill a real gap.

The data behind most benchmark reports comes from aggregated platform data, advertiser surveys, or both. Google, Meta, and the major ad platforms release some of this directly. Third-party tools like SEMrush publish their own analyses. Industry bodies and agencies compile it from client portfolios. None of these sources are neutral, and none of them are complete.

What they measure is central tendency: the median or mean performance across a large, heterogeneous pool of advertisers. That pool includes well-run campaigns and badly run ones, mature accounts and new ones, large budgets and small ones, sophisticated creative and stock-photo ads. When you see a benchmark figure, you are looking at the average of all of that combined.

Early in my career, I was running paid search for a travel client and inherited a campaign that the previous team had benchmarked against broad industry averages. The CTRs looked fine on paper. What nobody had noticed was that the account was almost entirely branded search, which inflates CTR considerably. Strip out the brand terms and the non-brand performance was well below average. The benchmark had been providing false comfort for months.

The Channels and the Numbers Worth Knowing

Rather than citing specific percentages that will be outdated within a year, it is more useful to understand the ranges and the logic behind them. Here is how the major channels tend to behave.

Paid search (Google Ads). Click-through rates on search ads vary enormously by industry. Legal, financial services, and insurance tend to see lower CTRs because competition is fierce and ads are more formulaic. Retail and e-commerce often see higher CTRs because intent is clearer and creative has more room to differentiate. Conversion rates follow a similar pattern: simpler, lower-value purchase categories tend to convert at higher rates than considered-purchase ones.

Paid social (Meta, LinkedIn, TikTok). CTRs on social are structurally lower than on search because the audience is not actively looking for what you are advertising. You are interrupting, not responding. LinkedIn CPCs are notably higher than Meta because the audience targeting is more precise and the B2B intent carries a premium. TikTok tends to show stronger performance for younger demographics and impulse-purchase categories, weaker for high-consideration B2B.

Display and programmatic. Display CTRs are low by any measure. That is not necessarily a problem if your objective is awareness or retargeting, but if someone is optimising a display campaign for clicks, they are measuring the wrong thing. The value of display is often in the view-through contribution to conversion, which most attribution models undercount.

Email. Open rates have been distorted since Apple’s Mail Privacy Protection changes in 2021. Any benchmark that does not account for this is comparing incompatible data. Click-to-open rate is now a more reliable signal than raw open rate for most senders.

If you are building out a broader go-to-market plan and want to understand how channel benchmarks fit into commercial planning, the Go-To-Market and Growth Strategy hub covers the wider framework.

The Metrics That Actually Matter by Objective

One of the most consistent mistakes I see in performance reviews is a mismatch between campaign objective and the metric being reported. This is not a junior-marketer problem. I have sat in board presentations at agencies managing serious budgets where the lead metric was CTR on a brand awareness campaign. CTR tells you almost nothing about awareness. It tells you whether someone was curious enough to click, which is a different thing entirely.

The right benchmark depends entirely on what you are trying to achieve.

For demand generation. Cost per qualified lead and lead-to-opportunity conversion rate matter more than cost per click. A campaign with a high CPC that generates leads that close at 40% is better than a campaign with a low CPC generating leads that close at 5%.

For e-commerce. Return on ad spend is the headline metric, but it needs to be read alongside average order value and margin. An ROAS of 4x sounds strong until you discover the margin is 15%, at which point you are spending more on advertising than you are making in profit. I have seen this exact situation play out at a retail client. The ROAS looked healthy in the dashboard. The P&L told a different story.
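The arithmetic behind that retail example is worth making explicit. A minimal sketch, using the 4x ROAS and 15% margin figures from the example above (before any other costs):

```python
def profit_per_pound_spent(roas: float, gross_margin: float) -> float:
    """Gross profit generated per £1 of ad spend, net of the spend itself.

    revenue per £1 = roas; gross profit = revenue * margin;
    so net of ad spend: roas * margin - 1.
    """
    return roas * gross_margin - 1.0

# The scenario from the text: a 4x ROAS on a 15% margin
print(round(profit_per_pound_spent(4.0, 0.15), 2))  # -0.4: losing 40p per £1 spent

# Break-even ROAS is simply 1 / gross margin
print(round(1 / 0.15, 2))  # 6.67: the ROAS needed just to break even
```

At a 15% margin, a dashboard ROAS of 4x is a loss-making campaign: the break-even point is closer to 6.7x.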

For brand building. Reach, frequency, and brand recall are the relevant metrics. Optimising for CTR on a brand campaign often degrades the creative because you end up writing direct-response copy for an awareness objective. The two do not mix cleanly.

For retention and lifecycle. Customer lifetime value, repeat purchase rate, and churn are the benchmarks that matter. Paid media is often the wrong tool here, but when it is used, the attribution needs to be set up to capture long-term value rather than last-click revenue.

How to Read a Benchmark Report Without Being Misled

Benchmark reports are published with good intentions and read with insufficient scepticism. Here is what to check before you use any published figures.

Sample composition. Who is in the data? A benchmark built from Fortune 500 advertisers is not relevant to a £50,000-a-month account. A benchmark built from small e-commerce businesses is not relevant to a B2B SaaS company. If the report does not tell you who contributed the data, treat it with caution.

Date range. Digital advertising performance shifts constantly. Platform algorithm changes, privacy regulation, new ad formats, and macroeconomic conditions all move the numbers. A benchmark from 2021 does not reflect post-iOS 14 attribution, post-pandemic travel demand, or current CPM inflation in certain categories. Check when the data was collected, not just when the report was published.

Metric definitions. Conversion rate means different things to different platforms. Google might count a store visit as a conversion. Meta might count a landing page view. Your own CRM might count a qualified sales call. If you are comparing your conversion rate to a benchmark without checking the definition, you may be comparing incompatible numbers.

Geographic scope. CPCs in the United States are structurally different from CPCs in Germany, the UK, or Southeast Asia. If your campaigns run in markets that are underrepresented in the benchmark data, the figures will not translate cleanly.

Tools like SEMrush’s growth toolset give you access to competitive intelligence that is more specific than most published benchmarks, because you can filter by geography, keyword category, and competitor set. That specificity is worth a lot more than a broad industry average.

When Your Own Data Is the Only Benchmark That Counts

There is a point in any account’s maturity where external benchmarks become largely irrelevant. Once you have 12 to 18 months of clean data, your own historical performance is a far more useful reference point than any published report. You know your audience, your creative, your landing page quality, your sales cycle, and your margin structure. The benchmark report does not.

When I was growing iProspect’s UK operation, we built internal benchmark databases across our client portfolio. The insight from comparing a retail client’s paid search performance against other retail clients in our portfolio, rather than against a published industry average, was substantially more actionable. We could see what was genuinely achievable in that specific competitive environment rather than what was average across a pool of accounts with very different characteristics.

The discipline of building your own benchmark baseline is straightforward. Track your core metrics monthly, segment by campaign type and objective, and note any significant changes in platform, budget, or strategy alongside the data. After a year, you have a reference point that is specific to your business. After two years, you can start to see seasonal patterns. After three, you have something genuinely predictive.
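That monthly, segmented tracking can live in a spreadsheet, but a minimal sketch in Python shows the shape of it. The segment names and conversion rates here are illustrative, not from any real account:

```python
from statistics import median

# Illustrative monthly records: one row per segment per month
monthly_records = [
    {"month": "2024-01", "segment": "prospecting", "conversion_rate": 0.021},
    {"month": "2024-02", "segment": "prospecting", "conversion_rate": 0.024},
    {"month": "2024-03", "segment": "prospecting", "conversion_rate": 0.019},
    {"month": "2024-01", "segment": "retargeting", "conversion_rate": 0.065},
    {"month": "2024-02", "segment": "retargeting", "conversion_rate": 0.071},
    {"month": "2024-03", "segment": "retargeting", "conversion_rate": 0.058},
]

def baselines(records: list, metric: str) -> dict:
    """Median of each segment's history: one baseline per campaign type,
    never a blended average across incompatible segments."""
    by_segment = {}
    for record in records:
        by_segment.setdefault(record["segment"], []).append(record[metric])
    return {seg: median(values) for seg, values in by_segment.items()}

print(baselines(monthly_records, "conversion_rate"))
# e.g. {'prospecting': 0.021, 'retargeting': 0.065}
```

The median is deliberate: one anomalous month (a site outage, a flash sale) should not drag the baseline with it the way a mean would.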

This is also where behavioural analytics tools add value that platform data alone cannot provide. Understanding why conversion rates move, not just that they moved, requires visibility into on-site behaviour. A benchmark tells you your conversion rate is below average. Session recordings and heatmaps tell you where users are dropping off and why.

The Benchmarks That Are Consistently Misused

Some metrics attract more misuse than others. These are the ones worth being particularly careful about.

Click-through rate. CTR is a measure of ad relevance and creative quality in a specific context. It is not a measure of campaign effectiveness. A high CTR on the wrong audience is expensive. A low CTR driving high-quality leads at acceptable cost is exactly what you want. I have seen campaigns get paused because the CTR was “below benchmark” when the cost per acquisition was the best in the account.

Cost per click. CPC is an input, not an outcome. Obsessing over CPC without reference to what those clicks do is like optimising your cost per job interview without caring whether the candidates are any good. The question is always what the click is worth, not what it costs.

Quality Score and Ad Relevance scores. These platform-generated scores are useful for diagnosing structural issues in an account, but they are not performance metrics. A Quality Score of 10 does not mean a campaign is profitable. It means the platform considers the ad well-matched to the keyword. Those are different things.

Engagement rate on social. Engagement rate is a measure of content resonance with an existing audience. It tells you very little about commercial impact. I judged the Effie Awards for several years and one of the consistent patterns in losing entries was campaigns with strong engagement metrics and weak business results. Likes are not revenue.

Commercial transformation in marketing, as BCG’s work on go-to-market strategy has noted, requires connecting marketing metrics to business outcomes rather than treating platform metrics as ends in themselves. That connection is where most performance reporting falls short.

How to Set Internal Benchmarks That Actually Drive Performance

Setting benchmarks internally is a discipline, not a one-time exercise. Here is a process that works in practice.

Start with the business metric and work backwards. If your business needs a cost per acquisition of £80 to be profitable, that is your benchmark. Everything else (CPC, CTR, conversion rate) is a diagnostic input that helps you understand why you are hitting or missing that number.
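Working backwards this way also shows how the diagnostic inputs combine. A small sketch, using the £80 constraint from the example and two hypothetical campaigns:

```python
def cpa(cpc: float, conversion_rate: float) -> float:
    """CPA decomposes into its diagnostic inputs:
    cost per click divided by click-to-conversion rate."""
    return cpc / conversion_rate

TARGET_CPA = 80.0  # the business constraint from the example: £80 to be profitable

# Hypothetical campaigns: the 'cheap clicks' one is the one that misses target
print(round(cpa(cpc=1.50, conversion_rate=0.015), 2))  # 100.0: over target
print(round(cpa(cpc=3.00, conversion_rate=0.050), 2))  # 60.0: under target
```

The campaign with double the CPC is the profitable one, which is exactly why CPC on its own is a diagnostic, not a benchmark.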

Segment before you average. Averaging performance across all campaigns obscures what is actually happening. A brand campaign and a competitor-conquesting campaign should not share the same CTR benchmark. A retargeting campaign and a prospecting campaign should not share the same conversion rate benchmark. Build separate baselines for each campaign type.

Set a floor, not a ceiling. Benchmarks should define the minimum acceptable performance, not the target. If your floor is an ROAS of 3x, your target should be higher. Using the benchmark as the target means you are optimising to be average.

Review quarterly, not annually. Platform changes, competitive shifts, and seasonal variation mean that a benchmark set 12 months ago may no longer be relevant. A quarterly review cycle keeps your baselines current without creating so much administrative overhead that it stops happening.

If you want to think about benchmarking in the context of a broader growth strategy, including how channel performance connects to market penetration and customer acquisition economics, the Go-To-Market and Growth Strategy hub is a good place to continue.

A Note on Attribution and What It Does to Benchmarks

Attribution is the elephant in the room in any conversation about digital advertising benchmarks. The performance figures you see in any platform are a function of the attribution model that platform uses, and most platforms use models that favour themselves.

Last-click attribution overstates the contribution of bottom-of-funnel channels like branded search and retargeting. First-click attribution overstates the contribution of awareness channels. Data-driven attribution is better in theory but requires significant conversion volume to be statistically reliable, and even then it operates as a black box.
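The difference between these models is easy to see in code. A simplified sketch with an illustrative three-touch path; real platform implementations are far more complex, but the credit-assignment logic is the point:

```python
def assign_credit(touchpoints: list, revenue: float, model: str) -> dict:
    """Split one conversion's revenue across the user's touchpoint path.
    'last_click' gives everything to the final channel, 'first_click'
    to the first, and 'linear' splits it evenly across all touches."""
    credit = {channel: 0.0 for channel in touchpoints}
    if model == "last_click":
        credit[touchpoints[-1]] += revenue
    elif model == "first_click":
        credit[touchpoints[0]] += revenue
    elif model == "linear":
        for channel in touchpoints:
            credit[channel] += revenue / len(touchpoints)
    return credit

# Hypothetical path: display seeds awareness, branded search closes
path = ["display", "paid_social", "branded_search"]
print(assign_credit(path, 100.0, "last_click"))   # all £100 to branded search
print(assign_credit(path, 100.0, "first_click"))  # all £100 to display
```

Same user, same £100 of revenue, and the "best-performing channel" flips entirely depending on the model. That is the sense in which a benchmarked ROAS figure is an attribution-model output rather than a business result.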

When you compare your ROAS to a benchmark, you are comparing attribution-model outputs, not actual business results. Two advertisers with identical real-world performance can show very different ROAS figures depending on their attribution settings. This is not a theoretical problem. It is a practical one that affects budget allocation decisions every day.

The most honest approach I have found is to triangulate. Look at platform-reported performance, look at your CRM data, and look at revenue trends. When all three tell a consistent story, you can have reasonable confidence in your benchmarks. When they diverge, you have an attribution problem that no benchmark report will solve.

Tools that support conversion optimisation and growth analysis can help bridge the gap between platform data and actual on-site behaviour, giving you a more complete picture than either source provides alone.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a good click-through rate for Google Ads?
There is no single answer because CTR varies significantly by industry, match type, ad position, and whether you include branded terms. Search ads in competitive categories like insurance and legal tend to see lower CTRs than retail or travel. A more useful question is whether your CTR is improving over time and whether the clicks you are generating convert at a profitable rate. A high CTR with poor conversion quality is not a good result.
How do I know if my cost per acquisition is competitive?
Start with your own unit economics rather than external benchmarks. If your average order value is £200 and your gross margin is 40%, you have £80 of margin to work with before you need to account for other costs. That gives you a maximum CPA of something below £80. Whether that is competitive depends on your category, but the internal constraint is more important than the industry average. Published CPA benchmarks are useful for a rough sense check, not for setting strategy.
Are digital advertising benchmarks different for B2B and B2C?
Yes, substantially. B2B campaigns typically see lower CTRs and conversion rates but higher average deal values, which means a higher CPA can still be profitable. B2B sales cycles are also longer, which makes last-click attribution particularly misleading. B2C benchmarks tend to be higher-volume and more platform-standard, but vary widely by category. Any benchmark report that does not separate B2B and B2C figures is averaging across fundamentally different buying behaviours.
How often should I update my advertising benchmarks?
Internal benchmarks should be reviewed quarterly. Platform algorithm changes, shifts in competitive intensity, and seasonal variation can all move your baseline meaningfully within a few months. Annual reviews are too infrequent to catch these shifts in time to act on them. External benchmark reports are worth consulting when you are entering a new channel or market, but should be treated as directional rather than definitive.
What is a realistic return on ad spend for e-commerce?
ROAS varies too much by category, margin structure, and channel mix to give a single figure that means anything. A business with 60% gross margins can be profitable at an ROAS of 2x. A business with 15% gross margins may need an ROAS of 8x or higher to break even on advertising. The right question is what ROAS you need given your specific margin structure, not what the industry average is. Calculate your break-even ROAS first, then use external benchmarks to assess whether that target is achievable in your category.
