
PR measurement in the UK has a credibility problem. Too many teams report metrics that leadership does not understand, use methodologies that change every quarter, or over-promise attribution that cannot be proven. This framework follows the Barcelona Principles (adopted by AMEC, the CIPR, and the PRCA) and builds a reporting system that is honest, consistent, and tied to decisions.

1) Start With Outcomes, Not Metrics

Before choosing any KPI, define what PR should change. Write two to three outcome statements that leadership agrees on.

Common PR outcomes for UK organisations:

| Outcome | What it looks like | How to measure |
|---|---|---|
| Reputation protection | Positive or neutral tone in Tier 1 outlets (BBC, FT, Guardian, Times, Telegraph, Sky News) | Sentiment trend + coverage quality score |
| Demand generation | Increased website traffic, enquiries, or applications following coverage | Referral traffic via UTM / Google Analytics, enquiry volume |
| Risk reduction | Early detection of negative narratives; fast response to issues | Response time metrics, crisis coverage volume vs baseline |
| Stakeholder trust | Consistent messaging across audiences; favourable regulatory and investor engagement | Message pull-through, stakeholder survey scores |
| Employer brand | Improved recruitment pipeline and perception among candidates | Application volume correlated with coverage, Glassdoor trends |

Leadership must sign off on these outcomes before you build the dashboard. If they do not agree on what PR should achieve, measurement will never satisfy them.

2) Separate Inputs, Outputs, and Outcomes

This is the single most important structural decision in PR measurement. Conflating inputs with outcomes is what makes PR reporting look fluffy.

  • Inputs: What the comms team did. Pitches sent, briefings held, press releases distributed, spokesperson appearances.
  • Outputs: What happened as a result. Coverage volume, outlet tier, message accuracy, share of voice.
  • Outcomes: What changed because of the coverage. Perception shifts, stakeholder behaviour, risk mitigation, business impact.

Report inputs to the comms team internally. Report outputs monthly to the CCO. Report outcomes quarterly to the board. Never present inputs to leadership as if they are outcomes -- "we sent 47 pitches this month" is an input, not a result.

3) Build a Six-Metric Dashboard

A strong executive dashboard for UK PR measurement includes:

Coverage Quality Score

A composite metric combining outlet tier, message accuracy, spokesperson inclusion, and tone. More useful than raw volume.

How to calculate: Assign Tier 1 outlets (BBC, FT, Guardian, Times, Telegraph, Sky News) a weight of 3x, Tier 2 (trade titles like Insurance Times, PR Week, Citywire, The Grocer, Health Service Journal; major regionals like Evening Standard, Scotsman) a weight of 2x, and Tier 3 a weight of 1x. Add +1 for key message present, +1 for positive tone, -1 for negative tone. Sum and trend monthly.
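The scoring rule above can be sketched in a few lines. This is an illustrative implementation, not a vendor feature: the field names and sample articles are assumptions, and the weights and bonuses are taken directly from the rule described.

```python
# Coverage quality score: tier weight (3x/2x/1x), +1 for a key message,
# +1 for positive tone, -1 for negative tone. Sample data is illustrative.

TIER_WEIGHTS = {1: 3, 2: 2, 3: 1}

def article_score(tier, has_key_message, tone):
    """Score a single article per the rule above."""
    score = TIER_WEIGHTS[tier]
    if has_key_message:
        score += 1
    if tone == "positive":
        score += 1
    elif tone == "negative":
        score -= 1
    return score

def monthly_quality_score(articles):
    """Sum article scores across one month's coverage."""
    return sum(article_score(a["tier"], a["key_message"], a["tone"])
               for a in articles)

coverage = [
    {"tier": 1, "key_message": True,  "tone": "positive"},  # 3 + 1 + 1 = 5
    {"tier": 2, "key_message": False, "tone": "neutral"},   # 2
    {"tier": 3, "key_message": True,  "tone": "negative"},  # 1 + 1 - 1 = 1
]
print(monthly_quality_score(coverage))  # 8
```

Trend the monthly total rather than reading any single month in isolation; the absolute number is arbitrary, but the direction is meaningful once the weights are locked.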

Automate this in Meltwater using custom tags and source groups, or in Signal AI using source metadata and topic classification.

Message Pull-Through

Percentage of Tier 1 and Tier 2 coverage that includes at least one of your three to five defined key messages.

Benchmark: Well-run UK programmes achieve 35-50% pull-through in Tier 1 outlets. Below 25% means your messages are not landing. Above 60% is exceptional and usually campaign-specific.

Human review is more reliable than keyword matching for this metric. Automated tools miss paraphrasing and context.
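For human-coded data, the calculation itself is a simple share of eligible articles. A minimal sketch, assuming each article has been coded with a count of key messages found:

```python
# Pull-through: % of Tier 1 and Tier 2 articles carrying at least one
# key message. Field names and sample data are illustrative assumptions.

def pull_through(articles, tiers=(1, 2)):
    eligible = [a for a in articles if a["tier"] in tiers]
    if not eligible:
        return 0.0
    hits = sum(1 for a in eligible if a["messages_found"] >= 1)
    return 100 * hits / len(eligible)

sample = [
    {"tier": 1, "messages_found": 2},
    {"tier": 1, "messages_found": 0},
    {"tier": 2, "messages_found": 1},
    {"tier": 3, "messages_found": 1},  # Tier 3 excluded from the metric
]
print(round(pull_through(sample), 1))  # 66.7
```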

Quality-Weighted Share of Voice

Your brand's share of total coverage versus three to six named competitors, weighted by outlet tier. See the separate SOV calculation guide for the methodology.

Sentiment Trend

Report negative coverage as a percentage of total, plotted weekly. For most UK corporates, the negative baseline sits between 8% and 15%. A sustained move above 20% warrants investigation. A spike above 30% is crisis territory.

Automated sentiment in Meltwater, Signal AI, and Brandwatch is directionally useful but review Tier 1 coverage manually -- UK English sarcasm and understatement confuse algorithms.
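The thresholds above translate directly into an alerting rule. A sketch under illustrative assumptions (weekly counts of negative and total articles):

```python
# Classify each week's negative-coverage share against the bands above:
# normal (baseline), investigate (>20%), crisis (>30%). Data is illustrative.

def classify_week(negative, total):
    pct = 100 * negative / total
    if pct > 30:
        status = "crisis"
    elif pct > 20:
        status = "investigate"
    else:
        status = "normal"
    return round(pct, 1), status

weeks = [(4, 50), (12, 48), (18, 52)]
for neg, tot in weeks:
    print(classify_week(neg, tot))
# (8.0, 'normal')
# (25.0, 'investigate')
# (34.6, 'crisis')
```

Note that "sustained move above 20%" implies checking consecutive weeks, not reacting to a single data point; a one-week blip inside an otherwise normal trend is noise.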

Business Proxy

One metric that connects media activity to a business signal. Options:

  • Website traffic from earned media referrals (tracked via UTM or referral source in Google Analytics)
  • Inbound enquiry volume in weeks following major coverage
  • Recruitment application rates correlated with employer brand coverage
  • Investor meeting requests following financial media coverage

Present correlation, not causation. "Earned media referral traffic was 3.2x higher in weeks with Tier 1 coverage" is credible. "PR generated GBP 2.4 million in revenue" is not.
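The credible phrasing above is a ratio of averages, which is straightforward to produce from weekly analytics exports. A minimal sketch, with assumed field names and illustrative numbers:

```python
# Compare average earned-media referral sessions in weeks with vs without
# Tier 1 coverage. This shows correlation only; weeks and figures are
# illustrative assumptions, not real data.

from statistics import mean

weeks = [
    {"tier1_coverage": True,  "referral_sessions": 3200},
    {"tier1_coverage": True,  "referral_sessions": 2800},
    {"tier1_coverage": False, "referral_sessions": 900},
    {"tier1_coverage": False, "referral_sessions": 1100},
]

with_cov = mean(w["referral_sessions"] for w in weeks if w["tier1_coverage"])
without  = mean(w["referral_sessions"] for w in weeks if not w["tier1_coverage"])
print(f"Referral traffic was {with_cov / without:.1f}x higher "
      f"in weeks with Tier 1 coverage")  # 3.0x
```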

Response Rate / Relationship Health

A leading indicator: pitch-to-response rate from target journalists (target 10-15% cold, 25%+ warm), repeat coverage from the same journalist, and reactive enquiry response time against SLA.
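Checking these targets is basic arithmetic, shown here as an illustrative sketch (pitch and response counts are assumptions):

```python
# Pitch-to-response rates checked against the targets above:
# 10-15% for cold outreach, 25%+ for warm contacts. Figures are illustrative.

def response_rate(responses, pitches):
    return 100 * responses / pitches

cold = response_rate(6, 50)   # 12.0% -- within the 10-15% cold target
warm = response_rate(7, 25)   # 28.0% -- above the 25% warm target
print(cold, warm)
```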

4) Use Stable Benchmarks

Benchmarks only work when they are consistent:

  • Keep the competitor set stable for a quarter. Document any changes with rationale.
  • Use the same time window each period. Do not compare a three-week December against a full January.
  • Keep the source tier definitions unchanged for at least six months.
  • When you change methodology, restate the previous period for comparison.

Common Mistake: The Shifting Baseline

A UK professional services firm changed its competitor set twice, its sentiment methodology once, and its source tier definitions once -- all within a single year. When the CCO presented the annual review to the board, every quarter showed a different baseline. The board questioned the credibility of all the numbers. The fix: lock the methodology at the start of each year, document it in a one-page appendix to every report, and flag any changes with a restatement.

5) Add Narrative to Every Report

Numbers without narrative are ignored. Every dashboard delivery should include a three-to-five sentence summary:

This quarter: Quality-weighted share of voice increased from 22% to 27%, driven by FT and Times coverage of our annual results and a BBC interview with the CEO on consumer duty. Message pull-through in Tier 1 outlets rose to 44%, up from 31% last quarter, reflecting improved spokesperson preparation. Negative sentiment remained at 12%, within the normal range. Website referral traffic from earned media was 2.8x higher than the previous quarter. Recommended action: sustain results-period momentum with a follow-up data study targeting trade titles.

The narrative answers: what changed, why, and what should we do about it.

6) Drop AVE

Ad Value Equivalency estimates what coverage would cost as advertising. The CIPR, PRCA, and AMEC have all formally recommended against using it. The Barcelona Principles (updated 2020) explicitly state that AVEs are not the value of communications.

If leadership asks for AVE, present the coverage quality score and business proxy metric instead. These are defensible. AVE is not.

7) Set a Reporting Cadence

| Cadence | Audience | Content |
|---|---|---|
| Weekly | Comms team | Operational output: volume, alerts, emerging risks, daily brief quality |
| Monthly | CCO | Full dashboard: all six metrics with narrative, trend charts, competitor comparison |
| Quarterly | Board / SLT | One-page summary: outcome metrics, narrative, recommended strategic actions |
| Annual | Board | Year-in-review: outcome trends, benchmark comparison, next year's measurement priorities |

The monthly report is where most of the analytical work happens. The quarterly board report should be one page with no more than six numbers and a three-sentence summary.

8) Document Assumptions

Every metric has limits. State them explicitly in your methodology appendix:

  • Sentiment analysis accuracy is directional, not absolute. Tier 1 coverage is human-reviewed; Tier 2 and 3 rely on automated scoring.
  • Business proxy metrics show correlation, not causation. PR is one channel among many.
  • Share of voice is UK-only, earned media only, and excludes social media unless stated otherwise.
  • Message pull-through is coded by [human / automated keyword matching] with a review sample of [X%].

Transparency builds trust. When leadership understands the limits, they value the data more.

FAQ

How many PR KPIs should we report?

Five to eight KPIs is usually enough for leadership.

Can PR measurement prove ROI?

It can show directional impact and correlation, but direct attribution is limited.

Should we use AVE?

Most modern frameworks avoid AVE because it does not reflect outcomes.

What is the most reliable PR metric?

Coverage quality and message pull-through are often more reliable than volume.
