Advertising Value Equivalency Is Not a Metric. It’s a Story You Tell Yourself
Advertising value equivalency, or AVE, is the practice of assigning a monetary value to earned media coverage by calculating what the equivalent space or airtime would have cost if you had bought it as paid advertising. It sounds like a measurement framework. It is not. It is a rationalisation tool, and a surprisingly durable one given how thoroughly it has been discredited by every serious body in the measurement profession.
AVEs persist because they produce large, impressive numbers that are easy to defend in a budget meeting. They persist because PR and communications teams, under pressure to demonstrate value, reach for the most tangible proxy they can find. And they persist because, in many organisations, nobody with real commercial authority has ever pushed back hard enough to make them stop.
Key Takeaways
- AVEs measure the cost of equivalent ad space, not the value of coverage, the quality of the audience, or any downstream commercial outcome.
- Multipliers applied to AVE figures (commonly 2x, 3x, or 5x for editorial credibility) have no empirical basis and exist purely to inflate reported numbers.
- The Barcelona Principles, adopted by the global measurement community, explicitly reject AVEs as a valid measure of PR or communications effectiveness.
- The real problem with AVEs is not that they are inaccurate but that they answer the wrong question entirely. Coverage value is not the same as business value.
- Organisations that replace AVEs with outcome-based measurement tend to make better channel decisions, not just better reports.
I have sat in enough agency reviews and client debriefs to know how this plays out. A PR team delivers a campaign recap. The headline metric is a coverage value of £4.2 million, calculated against rate card advertising equivalency. The client nods. The budget gets renewed. Nobody asks whether any of that coverage moved a single person closer to buying anything. The number is big, it sounds credible, and everyone moves on. That is not measurement. That is theatre with a spreadsheet.
Where Does AVE Actually Come From?
The mechanics are straightforward. A piece of editorial coverage appears in a publication. The monitoring tool or PR agency identifies the approximate size of that coverage, whether a quarter page, a half page, or a 30-second broadcast slot, and then looks up what a paid advertisement of equivalent size in that same publication would cost. That rate card figure becomes the AVE. Some agencies then apply a multiplier, typically between two and five, on the basis that editorial coverage carries more credibility than paid advertising and therefore deserves a premium valuation.
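The arithmetic described above can be sketched in a few lines, which is itself revealing: the entire "methodology" fits in one multiplication. The figures below are hypothetical, and the `multiplier` parameter has no empirical basis, which is exactly the point.

```python
def advertising_value_equivalency(column_cm: float, rate_per_cm: float,
                                  multiplier: float = 1.0) -> float:
    """Compute an AVE figure the way monitoring tools typically do.

    column_cm:   size of the editorial coverage in column centimetres
    rate_per_cm: the publication's rate card price per column centimetre
    multiplier:  arbitrary "editorial credibility" premium (2x-5x in practice),
                 with no agreed methodology behind any specific value
    """
    return column_cm * rate_per_cm * multiplier

# Hypothetical example: a half-page piece (170 column cm) in a paper
# charging £95 per column cm, with a 3x multiplier applied.
ave = advertising_value_equivalency(170, 95, multiplier=3)
print(f"Reported AVE: £{ave:,.0f}")  # prints "Reported AVE: £48,450"
```

Note that nothing in the calculation references the audience, the sentiment, or any outcome. Change the multiplier from 3 to 5 and the "value" jumps 67 percent with no change in the underlying coverage at all.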
The multiplier is where the logic completely unravels. There is no agreed methodology for setting it, no empirical research that validates a specific figure, and no consistency across agencies or markets. One agency applies a 3x multiplier because a consultant suggested it a decade ago. Another uses 5x because it makes the numbers look better. The multiplier is not a measurement decision. It is a negotiation with reality.
For a broader view of how measurement sits within commercial growth strategy, the articles at The Marketing Juice growth strategy hub cover the full picture, from channel planning to go-to-market execution.
What the Barcelona Principles Actually Say
In 2010, the International Association for the Measurement and Evaluation of Communication (AMEC) convened a global summit in Barcelona and produced a set of principles for PR measurement. Those principles were updated in 2015 and again in 2020. The rejection of AVEs has been consistent across all three versions. The Barcelona Principles state explicitly that AVEs do not measure the value of public relations and communications, that they measure the cost of media space, and that they should not be used as a proxy for outcomes.
This is not a fringe position. It represents the professional consensus of the global measurement community. And yet AVEs remain in wide use, particularly in markets where procurement teams want a single comparable figure and PR agencies have not found a compelling enough alternative to replace it with.
The Forrester view on intelligent growth frameworks is worth reading alongside this, because it frames the broader problem: organisations that conflate activity metrics with growth metrics tend to make systematically worse investment decisions. AVEs are a textbook example of an activity metric dressed up as an outcome metric.
Why AVEs Survive Despite Being Discredited
The honest answer is that AVEs survive because they are useful to the people who report them, not to the people who need to make decisions based on them. When I was running agency teams, I watched this dynamic play out repeatedly. A communications team under pressure to justify headcount or budget would reach for AVEs because they produced a number large enough to feel significant. The alternative, reporting on actual audience reach, message penetration, or brand perception shift, required more rigorous methodology and often produced less impressive-looking figures.
There is also an organisational inertia problem. Once a metric becomes embedded in a reporting template, it acquires a kind of institutional legitimacy that is hard to dislodge. Finance teams build it into benchmarking models. Procurement teams use it to compare agency performance. By the time anyone questions whether it measures anything real, it has been in the reporting pack for three years and nobody wants to be the person who explains why this year’s numbers look smaller.
I have seen this exact dynamic in performance marketing too, though the metric in question is usually last-click attributed revenue rather than AVE. The principle is identical: a metric that flatters the channel doing the reporting gets embedded in the measurement framework, and then the measurement framework becomes the reality rather than a representation of it. Go-to-market execution feels harder than it should partly because teams are navigating by instruments that are not calibrated to the thing they actually care about.
What AVEs Cannot Tell You
The fundamental problem with AVEs is not imprecision. All measurement involves approximation, and honest approximation is fine. The problem is that AVEs answer a question nobody with a commercial objective should be asking.
Knowing that your coverage would have cost £800,000 to buy as advertising tells you nothing about whether anyone read it, whether those who read it were in your target audience, whether the message was accurate or favourable, whether it changed any perception, or whether it contributed in any way to a purchase decision. Rate card cost is a function of publication circulation and format size. It has no relationship to any of those things.
Consider a scenario I encountered more than once during my agency years. A brand receives extensive coverage in a national newspaper following a product recall. The AVE for that coverage is significant, perhaps hundreds of thousands of pounds. By the logic of advertising equivalency, this is a communications success. In commercial reality, it is a crisis. The metric is not just unhelpful here. It is actively misleading.
Sentiment is one obvious dimension AVEs ignore entirely. Reach quality is another. A mention in a publication whose readership has zero overlap with your target customer is worth less than a smaller mention in a niche outlet that reaches exactly the right people. AVEs treat all coverage as equivalent, weighted only by size and publication rate card. That is not a measurement framework. It is a counting exercise.
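To make the contrast concrete, here is a deliberately simple sketch of what weighting coverage by audience fit and sentiment looks like. The function and its weights are illustrative assumptions, not a validated methodology; the point is only that once sentiment can be negative, the product-recall coverage described above scores below a small, well-targeted piece rather than above it.

```python
def weighted_coverage_score(reach: int, audience_fit: float,
                            sentiment: float) -> float:
    """An illustrative quality-weighted score, not a validated methodology.

    reach:        estimated number of people who actually saw the piece
    audience_fit: fraction of that audience in the target segment (0 to 1)
    sentiment:    -1 (hostile) through 0 (neutral) to +1 (favourable)
    """
    return reach * audience_fit * sentiment

# A huge national piece with poor audience fit and hostile sentiment
# (e.g. product-recall coverage) versus a small, well-targeted one.
recall_piece = weighted_coverage_score(2_000_000, 0.05, -0.8)  # negative score
niche_piece = weighted_coverage_score(40_000, 0.9, 0.7)        # positive score
```

An AVE calculation would rank the recall coverage far higher, because it weights only size and rate card. Any scoring that admits audience fit and sentiment, however crude, inverts that ranking.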
What Should Replace AVEs?
This is where the conversation gets more difficult, because the honest answer is that better measurement is harder and more expensive than AVEs. It requires defining what you are actually trying to achieve before you start, and then building a measurement approach around those objectives rather than around what is easy to count.
For brand-building communications, the relevant metrics sit at the level of audience awareness, brand perception, and consideration. These require survey-based measurement or brand tracking studies, which cost money and take time. For product launches or campaign-specific PR, the relevant metrics might include direct traffic uplift, search volume changes for branded terms, or conversion rate changes in the period following significant coverage. None of these are as simple as multiplying column centimetres by a rate card, but all of them are more useful.
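A direct traffic uplift comparison, one of the campaign-specific metrics mentioned above, is simple to sketch. The daily session figures below are hypothetical, and a crude pre/post baseline comparison like this would need controls for seasonality and concurrent campaigns before anyone should trust it, but even this rough version connects coverage to an observed behaviour rather than a rate card.

```python
from statistics import mean

def traffic_uplift(pre_period: list[int], post_period: list[int]) -> float:
    """Percentage change in mean daily direct traffic, post-coverage vs pre.

    A crude baseline comparison: real analysis would control for
    seasonality and for other activity running in the same window.
    """
    baseline = mean(pre_period)
    observed = mean(post_period)
    return (observed - baseline) / baseline * 100

# Hypothetical daily direct sessions, the week before and after coverage
pre = [1200, 1150, 1300, 1250, 1180, 1220, 1260]
post = [1500, 1620, 1580, 1490, 1550, 1610, 1530]
print(f"Direct traffic uplift: {traffic_uplift(pre, post):+.1f}%")
```

The output is an honest approximation in the sense discussed later in this article: it carries uncertainty, but the uncertainty is about something commercially real.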
Reach and quality of reach matter more than equivalent cost. How many people actually saw the coverage? Were they in the target audience? Was the coverage in a context where they were likely to be receptive? These are harder questions to answer, but they are the right questions. Market penetration thinking is instructive here: growth comes from reaching people who do not yet know or consider you, and measurement should reflect whether communications are actually achieving that.
When I was growing a team at iProspect from around 20 people to closer to 100, one of the discipline problems we had to work through was the tendency for each channel team to report on the metrics that made their channel look best. SEO teams reported on rankings. Paid search teams reported on click volume. PR teams, where we had them, reported on coverage value. None of it connected to a shared view of commercial performance. Building a reporting framework that put business outcomes first, and channel metrics second, was one of the more important structural decisions we made. It was also one of the more uncomfortable ones, because it meant some channels that looked impressive on their own metrics looked considerably less impressive when measured against outcomes.
The Broader Measurement Problem AVEs Represent
AVEs are a symptom of a wider issue in marketing measurement: the tendency to optimise for reportability rather than accuracy. A metric that produces a clean, large, comparable number will always be attractive to teams under pressure to demonstrate value, regardless of whether it actually measures value. The incentive structure rewards metrics that look good in a slide deck, not metrics that drive better decisions.
When I judged the Effie Awards, what separated the entries that genuinely demonstrated effectiveness from the ones that were just well-produced case studies was almost always the quality of the measurement framework. The strongest entries defined clear objectives upfront, built measurement into the campaign design, and reported on outcomes that were plausibly connected to business performance. The weakest entries led with reach figures and coverage values and then made a leap of faith to commercial results that the data did not actually support.
The agile measurement approaches that Forrester has written about point toward a more iterative model: set objectives, measure against them, adjust. It is a more honest framework than the retrospective justification that AVEs typically support, where the coverage happens first and then the value is assigned to make it look worthwhile.
Feedback loops matter here too. Understanding how audiences actually respond to communications, rather than inferring value from rate card proxies, is the kind of grounded insight that makes measurement genuinely useful rather than decorative.
A Note on Honest Approximation
I want to be clear about something. The argument here is not that PR and earned media should be held to an impossibly precise standard that paid channels are not held to either. Attribution across all marketing channels involves approximation. Brand-building activity is genuinely difficult to tie directly to revenue in a way that satisfies a CFO. These are real problems, and they are not solved by pretending they do not exist.
The argument is that honest approximation is better than false precision. Saying “we reached approximately 2.4 million people in our target demographic, with predominantly positive sentiment, and brand tracking shows a 4-point uplift in consideration among exposed audiences” is an honest approximation. It involves uncertainty, but it is uncertainty about real things. Saying “our coverage was worth £3.8 million” is false precision. It is a specific number that implies a rigour the methodology does not support and that answers a question with no commercial relevance.
Marketing does not need perfect measurement. It needs measurement that is honest about what it knows, honest about what it does not know, and calibrated to the outcomes that actually matter to the business. AVEs fail on all three counts.
There is more on building commercially grounded strategy across the full go-to-market and growth strategy section at The Marketing Juice, including how measurement frameworks sit within broader channel and planning decisions.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
