AMEC Integrated Evaluation Framework: Stop Measuring PR Outputs and Start Proving Value

The AMEC Integrated Evaluation Framework is a structured methodology for measuring public relations effectiveness across the full communications chain, from activity and outputs through to the business outcomes that actually matter to senior stakeholders. It replaces the old habit of counting press clippings and media impressions with a seven-stage model that connects PR work to organisational goals.

If your PR reporting still leads with AVE (advertising value equivalent) or raw column inches, you are not measuring effectiveness. You are measuring effort, and dressing it up as impact.

Key Takeaways

  • The AMEC framework has seven stages: Objectives, Inputs, Activities, Outputs, Outtakes, Outcomes, and Impact. Most PR teams only measure the middle three.
  • AVE is not a measurement standard. It is a proxy metric with no agreed methodology, and AMEC formally rejected it in 2010 via the Barcelona Principles.
  • Outtakes, what audiences actually took away from communications, are the most underused and commercially important stage in the framework.
  • Proving PR’s contribution to business outcomes requires pre-agreed measurement architecture, not retrospective data gathering after a campaign has run.
  • The framework only works if PR objectives are written to be measurable from the outset. Vague objectives produce unmeasurable results, by design.

Why PR Measurement Has Always Had a Credibility Problem

I have sat on both sides of this problem. Running agencies, I watched talented PR teams produce genuinely good work and then undermine it completely with measurement reports that no CFO or commercial director would take seriously. Reach figures padded by syndication. Share of voice charts with no baseline. Coverage summaries that counted a two-line brand mention in a trade title the same as a 1,200-word feature in a national newspaper.

The problem was not dishonesty. It was the absence of a framework that connected PR activity to anything a business actually cared about. When you have no agreed measurement standard, you default to whatever makes the numbers look best. That is human nature, not malice.

AMEC, the International Association for Measurement and Evaluation of Communication, formalised the Integrated Evaluation Framework in 2016 after years of industry consultation. It built on the Barcelona Principles from 2010, which drew a line under AVE as a legitimate metric and established that outcomes matter more than outputs. The framework has been updated since, but the core logic remains: you cannot evaluate communications effectiveness without first defining what effectiveness looks like for your specific organisation and objective.

For a broader view of how measurement fits into communications strategy, the PR and communications hub at The Marketing Juice covers the strategic foundations that make frameworks like AMEC actually useful in practice.

What Are the Seven Stages of the AMEC Framework?

The framework runs sequentially, and that sequence matters. Each stage feeds the next. Skipping stages, or measuring them in isolation, produces the same distorted picture that PR has always struggled with.

Stage 1: Objectives

Everything starts here, and this is where most PR measurement fails before it has even begun. The framework requires that objectives be set at the start, that they be specific and measurable, and that they align to organisational goals rather than communications activity.

“Increase brand awareness” is not an objective. It is a direction. “Increase unaided brand recall among senior HR decision-makers in the UK by 8 percentage points over 12 months, measured via quarterly omnibus survey” is an objective. The difference is not pedantry. It is the difference between a PR programme that can prove its value and one that cannot.

When I judged the Effie Awards, the entries that fell apart fastest were the ones where the stated objective and the claimed result were measuring different things entirely. A brand would set an objective around consideration, then claim success based on reach. The two are not interchangeable, and any judge with commercial experience could see the gap immediately. The same logic applies here.

Stage 2: Inputs

Inputs are the resources committed to the programme: budget, time, creative assets, data, and the quality of the brief. This stage is often overlooked in evaluation because it feels administrative rather than analytical. But inputs matter for two reasons.

First, they establish the baseline for efficiency calculations. If you spent three times the budget of a comparable campaign and achieved similar outcomes, that is a meaningful finding, not a success story. Second, poor inputs explain poor outputs. A weak brief produces weak creative. Inadequate budget produces inadequate reach. Documenting inputs honestly prevents the retrospective excuse-making that damages agency-client relationships.
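
To make the efficiency point concrete, here is a minimal sketch in Python with invented figures; the campaign names, spend, and enquiry counts are illustrative assumptions, not benchmarks.

```python
# Illustrative only: invented figures for two hypothetical campaigns.
campaigns = {
    "our_campaign": {"spend_gbp": 150_000, "qualified_enquiries": 320},
    "comparable_campaign": {"spend_gbp": 50_000, "qualified_enquiries": 290},
}

for name, c in campaigns.items():
    cost_per_outcome = c["spend_gbp"] / c["qualified_enquiries"]
    print(f"{name}: £{cost_per_outcome:,.0f} per qualified enquiry")

# Similar outcomes at three times the spend is an efficiency finding, not a success
# story, which is why inputs need documenting honestly alongside the results.
```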

Stage 3: Activities

Activities are what the PR team actually did: media outreach, content creation, event management, influencer engagement, social publishing, spokesperson training. This stage is about documenting the work, not evaluating it. The evaluation comes later.

The common mistake is treating activity volume as a proxy for effectiveness. Sending 400 press releases is an activity. Whether those releases resulted in coverage that reached the right audience with the right message is an output question, not an activity question.

Stage 4: Outputs

Outputs are the immediate, tangible results of activities: media coverage secured, social posts published, events held, content pieces distributed. This is where most PR measurement stops, and it is a significant problem because outputs tell you almost nothing about whether the communications worked.

Coverage volume, reach, impressions, share of voice. These are outputs. They are worth tracking. But they are not evidence of effectiveness. A story reaching 2 million people who were never going to engage with your brand is worth considerably less than a story reaching 40,000 people who are actively evaluating a purchase in your category. The AMEC framework forces this distinction into the open.

Quality metrics belong here too: message delivery rate, tone of coverage, prominence of placement, spokesperson mentions, whether key messages appeared in the coverage rather than simply the brand name. These are still outputs, but they are more meaningful outputs than raw volume.
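
One way to put that distinction to work is a simple quality-weighted pass over coverage items. The sketch below is not an AMEC formula; the fields, weights, and figures are illustrative assumptions, but it shows why a well-targeted 40,000-reach trade piece can out-score a 2-million-reach passing mention.

```python
# Hypothetical coverage items -- fields, weights, and figures are illustrative, not an AMEC standard.
coverage = [
    {"outlet": "national_daily", "reach": 2_000_000, "key_message": False,
     "tone": "neutral", "audience_match": 0.05},
    {"outlet": "trade_title", "reach": 40_000, "key_message": True,
     "tone": "positive", "audience_match": 0.80},
]

TONE_WEIGHT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def quality_weighted_reach(item: dict) -> float:
    """Discount raw reach by audience fit, key-message delivery, and tone."""
    message_weight = 1.0 if item["key_message"] else 0.4
    return item["reach"] * item["audience_match"] * message_weight * TONE_WEIGHT[item["tone"]]

for item in coverage:
    print(item["outlet"], round(quality_weighted_reach(item)))
# The 40,000-reach trade piece (32,000) out-scores the 2-million-reach national mention (20,000).
```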

Stage 5: Outtakes

Outtakes are what the target audience actually took away from the communications: awareness created, messages recalled, attitudes shifted, intentions changed. This is the most underused stage in the framework and, commercially, the most important bridge between what PR produced and what the business experienced.

Measuring outtakes requires research. Surveys, tracking studies, social listening with sentiment analysis, search volume changes for branded terms. It is more expensive than counting coverage, which is why it gets skipped. But without outtakes data, you cannot distinguish between a campaign that reached the right people and changed their thinking and one that generated impressive coverage that nobody processed or remembered.
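
Where primary research is out of reach, branded search volume is one of the cheaper outtake proxies. A minimal sketch, assuming you have exported weekly branded search volumes from whatever search data source you already use; the figures are invented:

```python
from statistics import mean

# Hypothetical weekly branded search volumes, exported from your own search data source.
pre_campaign = [1180, 1220, 1150, 1240, 1190, 1210]     # six weeks before launch
during_campaign = [1310, 1420, 1480, 1390, 1450, 1510]  # six weeks of campaign

baseline = mean(pre_campaign)
campaign = mean(during_campaign)
lift_pct = (campaign - baseline) / baseline * 100

print(f"Baseline weekly volume: {baseline:.0f}")
print(f"Campaign weekly volume: {campaign:.0f}")
print(f"Lift vs. pre-campaign baseline: {lift_pct:.1f}%")
# A lift against a real baseline is an outtake signal; a large absolute number on its own is not.
```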

I have seen this gap exploited deliberately. Not often, but enough. Agencies presenting output data as if it were outcome data, knowing that clients without measurement sophistication would not push back. The AMEC framework makes that conflation much harder to sustain because the stages are explicitly separated.

Stage 6: Outcomes

Outcomes are the behavioural changes that result from the communications: website visits from PR-driven coverage, inbound enquiries, sales conversions, employee applications, policy changes, investor behaviour. These are the metrics that commercial directors and CFOs understand, which is precisely why they are so important to PR’s credibility.

Proving outcomes requires integration with other data systems. Web analytics, CRM data, sales pipeline reporting. PR teams that operate in isolation from these systems will always struggle to demonstrate outcomes, because the data lives elsewhere. Building the measurement architecture before a campaign launches, not after, is the only way to capture this data cleanly.

Attribution is genuinely difficult here. PR rarely operates in isolation. A customer who reads a positive feature about your brand and then converts three weeks later via a paid search ad is a PR-influenced conversion that paid search will claim entirely. The honest answer is that multi-touch attribution models are imperfect, but they are considerably more defensible than either claiming full credit or claiming none. AMEC does not solve the attribution problem. It does force you to acknowledge it explicitly rather than pretending it does not exist.
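
The fractional-credit idea behind multi-touch models can be sketched in a few lines. The position-based weighting below (40/20/40) is one common convention rather than an AMEC requirement, and the journey data and order value are invented:

```python
# A hypothetical converting customer journey: earned coverage first, paid search last.
journey = ["pr_feature", "organic_search", "paid_search"]
conversion_value = 900.0  # illustrative order value in GBP

def position_based_credit(touchpoints: list[str], value: float) -> dict[str, float]:
    """40% of credit to the first touch, 40% to the last, 20% split across the middle."""
    if len(touchpoints) == 1:
        return {touchpoints[0]: value}
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[0]] += value * 0.4
    credit[touchpoints[-1]] += value * 0.4
    middle = touchpoints[1:-1]
    for t in middle:
        credit[t] += value * 0.2 / len(middle)
    return credit

print(position_based_credit(journey, conversion_value))
# Under last-click, paid search claims all £900; under position-based weighting,
# the PR feature keeps credit for opening the journey.
```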

Stage 7: Impact

Impact is the contribution of communications to the organisation’s overarching goals: revenue growth, market share, reputation scores, employee retention, regulatory relationships. This is the highest level of the framework and the hardest to measure directly, because at this level PR is one of many contributing factors.

The framework does not require PR to prove sole causation at the impact level. It requires honest attribution: what contribution did communications make, alongside other business activities, to the outcomes that matter most to the organisation? That is a more defensible and more intellectually honest position than either claiming PR drove all of it or retreating to “brand value is hard to measure” as a permanent excuse.

The Barcelona Principles: The Foundation the Framework Sits On

The AMEC framework did not appear from nowhere. It built on the Barcelona Principles, first agreed in 2010 and updated in 2015 and 2020. The principles established several positions that were genuinely contested at the time.

  • Goal setting and measurement are fundamental to communications planning.
  • Media measurement requires quantity and quality.
  • AVE is not the value of public relations.
  • Social media can and should be measured.
  • Outcomes are preferred to outputs.
  • Business results should be measured where possible.
  • Measurement and evaluation should be transparent, consistent, and valid.

The rejection of AVE was the most significant. Advertising value equivalent, the practice of multiplying editorial coverage by the equivalent advertising rate, has no agreed methodology, no empirical basis, and produces wildly different figures depending on which rate card and multiplier you use. It persisted for decades because it produced large, impressive-sounding numbers. The Barcelona Principles called it out directly. The AMEC framework completed the work by providing something to replace it with.
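
The inconsistency is easy to demonstrate with arithmetic. The rate cards, multipliers, and article size below are invented, but the spread they produce is the point:

```python
# One hypothetical feature article, "valued" under different rate-card and multiplier assumptions.
article_size_full_pages = 0.75                      # illustrative space occupied
rate_cards_gbp_per_page = {"card_a": 8_000, "card_b": 14_500}
editorial_multipliers = [1.0, 2.5, 3.0]             # multipliers used in practice vary arbitrarily

for card, rate in rate_cards_gbp_per_page.items():
    for m in editorial_multipliers:
        ave = article_size_full_pages * rate * m
        print(f"{card}, multiplier {m}: £{ave:,.0f}")
# The same article is "worth" anything from £6,000 to £32,625, depending on assumptions.
```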

How to Apply the Framework in Practice

The framework is not self-implementing. It requires deliberate decisions at the planning stage of every PR programme. Here is how that looks in practice.

Start with the business objective, not the communications objective. What does the organisation need to achieve? Revenue from a new product line. Talent attraction in a competitive market. Regulatory goodwill ahead of a policy consultation. Reputation recovery after a crisis. The communications objective should be derived from the business objective, not invented independently and then retrofitted.

Define what success looks like at each stage before the programme begins. What outputs will you track? What outtakes research will you commission? What outcome data will you access, and from which systems? What impact metrics will you report against, and over what timeframe? These decisions are much harder to make retrospectively, and the data you need often does not exist if you have not planned for it.

Build measurement costs into the programme budget. Outtakes research costs money. Tracking studies cost money. Integration with CRM and analytics platforms requires time. If measurement is treated as an afterthought that comes out of whatever budget is left, you will always end up with output data dressed up as outcome data, because output data is cheap and outcome data is not.

When I was growing the agency from 20 to 100 people, one of the disciplines we built early was pre-campaign measurement planning. Every client brief had to answer: what data will prove this worked? Not “what will we measure?” but “what data, from which source, over what period, compared to what baseline?” It slowed the planning process slightly. It dramatically improved the quality of the work, because teams could not hide behind activity metrics when the evaluation framework was agreed upfront.
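
That discipline can be captured in something as simple as a structured measurement plan agreed at the briefing stage. The template below is a hypothetical sketch, not an AMEC artefact; the metrics, sources, and cadences are placeholders to be replaced per programme:

```python
# Hypothetical pre-campaign measurement plan: one entry per framework stage that will be evaluated.
measurement_plan = {
    "objective": "Increase unaided brand recall among UK senior HR decision-makers "
                 "by 8 percentage points over 12 months",
    "outputs":  {"metric": "key-message delivery in target trade titles",
                 "source": "media monitoring", "baseline": "prior 6 months", "cadence": "monthly"},
    "outtakes": {"metric": "unaided brand recall", "source": "quarterly omnibus survey",
                 "baseline": "pre-campaign wave", "cadence": "quarterly"},
    "outcomes": {"metric": "inbound enquiries from target segment", "source": "CRM",
                 "baseline": "prior 12 months", "cadence": "monthly"},
    "impact":   {"metric": "revenue from new product line", "source": "finance reporting",
                 "baseline": "annual plan", "cadence": "quarterly"},
}

# Flag any stage that has been included without an agreed baseline to measure against.
missing = [stage for stage, spec in measurement_plan.items()
           if isinstance(spec, dict) and not spec.get("baseline")]
print("Stages without an agreed baseline:", missing or "none")
```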

The Correlation Problem PR Shares With Every Other Marketing Discipline

One of the recurring issues I saw when judging awards was the conflation of correlation and causation. A brand would run a PR programme, brand metrics would improve, and the entry would claim causation. Sometimes that causal link was real. Often it was not, or at least could not be demonstrated from the evidence presented.

PR shares this problem with every other marketing discipline, but it is particularly acute because PR rarely has the controlled testing environments that digital channels can construct. You cannot A/B test a media campaign the way you can A/B test a paid search ad. You cannot hold a control group unexposed to earned media coverage the way you can exclude them from a display campaign.

The honest response to this is not to abandon outcome measurement. It is to be precise about what the evidence shows. “Brand consideration increased by 6 percentage points during the campaign period, against a flat trend in the prior quarter, among the audience segment most exposed to our media coverage” is a defensible claim. “Our PR programme drove a 6-point increase in brand consideration” is a stronger claim that the evidence may not fully support. The AMEC framework does not resolve this tension, but it does create the structure within which honest practitioners can be clear about what they are claiming and why.
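
The gap between the two claims is easier to see with numbers. Here is a minimal sketch of reporting lift against the prior-quarter baseline rather than claiming the whole movement; all figures are invented:

```python
# Hypothetical brand-consideration tracking, in percentage points.
prior_quarter = [31.0, 31.2, 30.8]       # flat trend before the campaign
campaign_period = [31.1, 34.5, 37.0]     # during the campaign, most-exposed segment

baseline = sum(prior_quarter) / len(prior_quarter)
end_of_campaign = campaign_period[-1]
lift = end_of_campaign - baseline

print(f"Consideration lift vs. flat prior-quarter baseline: {lift:.1f} points")
# Defensible claim: "+6 points during the campaign, against a flat prior trend,
# in the most-exposed segment" -- not "PR drove a 6-point increase".
```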

There is a parallel in how some technology vendors present performance data. I was once shown a case study where an AI-driven creative personalisation platform claimed enormous efficiency gains. The baseline creative was genuinely poor. Replacing it with anything better would have produced gains. The attribution of those gains to the AI was a story about a low baseline, not about the technology. PR measurement has the same vulnerability: if your pre-campaign metrics were terrible, almost any activity will look like a success. The framework requires you to set an honest baseline and measure against it, not against zero.

Where Most PR Teams Fall Short

In practice, most PR measurement programmes measure outputs thoroughly, touch outtakes occasionally, and rarely get near outcomes or impact in any rigorous way. There are structural reasons for this.

PR teams often do not have access to the business data they would need to measure outcomes. Sales data sits with the commercial team. Website analytics sit with the digital team. CRM data sits with marketing operations. Without cross-functional data access, PR measurement stops at the edge of what the PR team can see, which is mostly coverage and reach.

Budget constraints are real. Tracking studies and omnibus surveys cost money that smaller PR programmes cannot justify. The framework does not require expensive primary research for every campaign, but it does require some investment in outtakes measurement if you want to make credible claims about effectiveness.

And some practitioners, frankly, do not want rigorous measurement because rigorous measurement produces results that are harder to spin. Output metrics are controllable in a way that outcome metrics are not. A media relations team can always generate more coverage if they need to hit a number. They cannot manufacture a shift in brand consideration by working harder.

The AMEC framework is a tool. Like any tool, it is only as useful as the commitment of the people using it. Organisations that treat it as a compliance exercise will get compliance-grade results. Organisations that treat it as a genuine discipline for understanding what their communications are achieving will get something considerably more valuable: an honest account of what is working, what is not, and where to focus next.

If you are building or rebuilding a PR measurement approach from scratch, the broader PR and communications resources at The Marketing Juice cover the strategic context that makes frameworks like AMEC worth implementing properly rather than superficially.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the AMEC Integrated Evaluation Framework?
The AMEC Integrated Evaluation Framework is a seven-stage model for measuring public relations and communications effectiveness. The stages run from Objectives through Inputs, Activities, Outputs, Outtakes, Outcomes, and Impact. It was developed by the International Association for Measurement and Evaluation of Communication and formally launched in 2016, building on the Barcelona Principles established in 2010.
Why did AMEC reject AVE as a PR measurement metric?
Advertising value equivalent (AVE) was rejected because it has no agreed methodology, no empirical basis, and produces inconsistent results depending on which rate card and multiplier is applied. It measures the cost of buying equivalent advertising space, not the value of editorial coverage or its effect on audiences. The Barcelona Principles formally rejected AVE in 2010, and the AMEC framework replaced it with an outcomes-focused approach.
What is the difference between outputs and outtakes in the AMEC framework?
Outputs are the immediate, tangible results of PR activity: coverage secured, reach achieved, content published. Outtakes are what the target audience actually took away from the communications: messages recalled, awareness created, attitudes changed, intentions shifted. Outputs tell you what PR produced. Outtakes tell you whether it worked on the audience it was designed to reach. Most PR measurement stops at outputs and never gets to outtakes.
How do you measure PR outcomes when attribution is difficult?
PR outcome measurement requires integration with business data systems including web analytics, CRM data, and sales reporting. Attribution is genuinely difficult because PR rarely operates in isolation, and customers interact with multiple touchpoints before converting. The honest approach is to document PR-influenced behaviour (such as website visits from coverage, or search volume increases during campaign periods) and report contribution rather than claiming sole causation. Multi-touch attribution models, while imperfect, are more defensible than either claiming full credit or none.
Do you need a large budget to implement the AMEC framework properly?
No, but you do need to allocate measurement budget deliberately rather than treating it as an afterthought. Outtakes research, such as tracking studies or omnibus surveys, adds cost. However, the framework scales to programme size. Smaller programmes can use lower-cost proxies for outtakes measurement, such as branded search volume changes or social listening with sentiment analysis. The non-negotiable requirement is planning the measurement architecture before the programme launches, not after, regardless of budget level.
