
Most media relations teams report on volume — number of pitches sent, number of articles generated, total reach. These metrics tell leadership almost nothing useful. They cannot distinguish between a throwaway mention in a regional blog and a lead quote in the Financial Times. Here are the metrics that actually inform decisions, how to collect them, and what benchmarks look like for UK comms teams.

Metric 1: Journalist Response Rate

What It Is

The percentage of pitched journalists who respond (positively or negatively) within 5 working days.

Why It Matters

Response rate is a leading indicator. If journalists are not responding, your targeting or angles are off. If they are responding but not covering, your assets or timing are the problem. The metric tells you where the funnel is broken.

How to Track It

Use a simple spreadsheet or CRM system. For each pitch:

  • Journalist name and outlet
  • Date sent
  • Response received (yes/no)
  • Response type (coverage, request for more info, decline, no response)

Meltwater and Cision both offer outreach tracking modules, but a Google Sheet works for teams sending under 50 pitches per month.
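For teams that prefer code to a spreadsheet, the same log can be kept as structured records. A minimal sketch, assuming a simple list of pitch records (the field names and response labels are illustrative, not from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class Pitch:
    journalist: str
    outlet: str
    date_sent: str
    response: str  # "coverage", "more_info", "decline", or "no_response"

def response_rate(pitches):
    """Share of pitches that drew any reply (positive or negative)."""
    if not pitches:
        return 0.0
    responded = sum(1 for p in pitches if p.response != "no_response")
    return responded / len(pitches)

# Hypothetical quarter of targeted outreach
pitches = [
    Pitch("A. Writer", "FT", "2024-03-01", "coverage"),
    Pitch("B. Reporter", "Times", "2024-03-01", "decline"),
    Pitch("C. Editor", "Citywire", "2024-03-02", "no_response"),
    Pitch("D. Journalist", "BBC News", "2024-03-02", "more_info"),
]
print(f"Response rate: {response_rate(pitches):.0%}")  # → Response rate: 75%
```

Note that a decline still counts as a response: the funnel question is whether journalists engage at all, not whether they say yes.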

Benchmarks

  • Mass distribution (100+ recipients): 2-5% response rate. This is why mass distribution is wasteful.
  • Targeted outreach (20-40 recipients, personalised): 15-25% response rate.
  • Highly targeted (5-10 recipients, relationship-based): 35-50% response rate.

If your targeted outreach is below 15%, the problem is one of three things: the wrong journalists, a weak angle, or poor timing.

Metric 2: Coverage Quality Score

What It Is

A composite score applied to every piece of coverage, based on factors that matter to your organisation.

A Scoring Framework That Works

| Factor | Score | Criteria |
|---|---|---|
| Outlet tier | 1-3 | Tier 1 national/top trade = 3, Tier 2 regional/specialist = 2, Tier 3 online/blog = 1 |
| Message pull-through | 0-3 | No key message = 0, passing mention = 1, message present = 2, message leads the piece = 3 |
| Spokesperson quoted | 0-1 | Named spokesperson quoted = 1, no quote = 0 |
| Brand prominence | 0-2 | Brand in headline = 2, brand in first 3 paragraphs = 1, mentioned later = 0 |
| Link included | 0-1 | Followed link to your site = 1, no link = 0 |

Maximum score: 10 per piece of coverage.

A piece in the Guardian that leads with your key message, quotes your CEO, and links to your website scores 10. A mention in a niche blog with no message and no link scores 1-2.
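The framework translates directly into a small scoring function. A sketch based on the table above, with range checks so a mistyped score is caught at entry:

```python
def quality_score(outlet_tier, message_pull, quoted, prominence, linked):
    """Composite 10-point coverage quality score.

    outlet_tier: 1-3, message_pull: 0-3, quoted: 0 or 1,
    prominence: 0-2, linked: 0 or 1.
    """
    for value, low, high in [(outlet_tier, 1, 3), (message_pull, 0, 3),
                             (quoted, 0, 1), (prominence, 0, 2), (linked, 0, 1)]:
        if not low <= value <= high:
            raise ValueError(f"score {value} outside range {low}-{high}")
    return outlet_tier + message_pull + quoted + prominence + linked

# Guardian piece: tier 1, message leads, CEO quoted, brand in headline, linked
print(quality_score(3, 3, 1, 2, 1))  # → 10
# Niche blog: tier 3, no message, no quote, brand buried, no link
print(quality_score(1, 0, 0, 0, 0))  # → 1
```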

How to Use It

  • Calculate an average quality score per campaign, per quarter, and per journalist
  • Track the score over time — it should trend upward as your targeting and messaging improve
  • Use it to weight your share of voice calculations: quality-adjusted SOV is far more useful than raw mention count
  • Report the number of "tier 1 quality placements" (score 7+) alongside total coverage volume

Common Mistake: Counting All Coverage Equally

A UK tech company reported to its board that it had earned "142 pieces of coverage" in Q3. The board was impressed. When the comms team scored the coverage, 118 of the 142 pieces were aggregator sites, content syndications, and low-authority blogs that reproduced a press release. Only 8 pieces scored above 7. The quality report told a very different story: the company had 8 meaningful placements, not 142.

Metric 3: Share of Voice (Quality-Weighted)

What It Is

Your brand's share of coverage within a defined topic or sector, weighted by coverage quality.

How to Calculate

1. Define the topic or sector (e.g., "UK fintech regulation")
2. Identify 3-5 competitors
3. Track all coverage mentioning any of the brands within the topic over a set period (monthly or quarterly)
4. Apply quality scores to every piece
5. Calculate: your brand's total quality points ÷ all brands' total quality points = your quality-weighted SOV
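The calculation is a ratio of summed quality points. A minimal sketch, assuming each brand's coverage has already been scored (the brand names and scores are invented for illustration):

```python
def quality_weighted_sov(scores_by_brand, brand):
    """Brand's share of total quality points across all tracked brands."""
    total = sum(sum(scores) for scores in scores_by_brand.values())
    if total == 0:
        return 0.0
    return sum(scores_by_brand[brand]) / total

# Quality scores per piece of coverage, one quarter, hypothetical data
quarter = {
    "YourBrand":   [8, 7, 9, 3],
    "CompetitorA": [5, 6, 2],
    "CompetitorB": [4, 4, 4, 2, 1],
}
print(f"{quality_weighted_sov(quarter, 'YourBrand'):.0%}")  # → 49%
```

Note how YourBrand wins on quality (27 of 55 points) despite having fewer pieces than CompetitorB; a raw mention count would have told the opposite story.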

Tools

Meltwater and Signal AI both support competitive tracking dashboards with customisable quality weighting. Cision offers similar functionality through its Analytics module. For smaller teams, manual tracking in a spreadsheet with 3-5 competitors is feasible for monthly reporting.

Benchmarks

There is no universal benchmark — SOV depends on your sector and competitive landscape. What matters is the trend:

  • Increasing quality-weighted SOV = your media strategy is working
  • Decreasing quality-weighted SOV = a competitor is outmanoeuvring you or your output has declined
  • Stable SOV with increasing quality score = you are earning better coverage from less outreach (efficiency gain)

Metric 4: Relationship Depth Index

What It Is

A measure of the strength of your relationships with priority journalists, based on observable behaviour over time.

How to Score It

| Signal | Points |
|---|---|
| Journalist published one piece mentioning your brand | 1 |
| Journalist proactively contacted you for comment | 3 |
| Journalist accepted a briefing invitation | 2 |
| Journalist published a second piece within 6 months | 2 |
| Journalist included your spokesperson as a named source | 2 |

Score each priority journalist quarterly. Track the trend.
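The quarterly scoring reduces to a lookup and a sum. A sketch using the weights from the table above (the signal key names are illustrative shorthand):

```python
# Point weights from the relationship depth table
SIGNAL_POINTS = {
    "published_mention": 1,   # published one piece mentioning your brand
    "proactive_contact": 3,   # contacted you for comment unprompted
    "accepted_briefing": 2,   # accepted a briefing invitation
    "second_piece_6mo": 2,    # published a second piece within 6 months
    "named_spokesperson": 2,  # quoted your spokesperson as a named source
}

def depth_score(signals_observed):
    """Sum the points for the signals observed this quarter."""
    return sum(SIGNAL_POINTS[s] for s in signals_observed)

q1 = ["published_mention", "proactive_contact", "accepted_briefing"]
print(depth_score(q1))  # → 6
```

A journalist showing all five signals in a quarter scores the maximum of 10.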

Why It Matters

Media relations is a relationship business. A team that has 5 journalists scoring 8+ will consistently outperform a team with 50 journalists scoring 1-2. Depth beats breadth.

How to Build It

Identify your top 15-20 priority journalists. These should be the journalists at outlets that matter most to your objectives. For a UK financial services company, this might include:

  • 2-3 at the FT (banking correspondent, personal finance editor, Lex column)
  • 2 at the Times (business editor, money section)
  • 1-2 at the Telegraph (money and City sections)
  • 1-2 at BBC News (business correspondent, sector specialist)
  • 1 at Sky News (business producer)
  • 3-4 at trade outlets (Citywire, Insurance Journal, FT Adviser)

Assign each journalist a relationship owner on your team. That person is responsible for maintaining the relationship through regular (not excessive) contact — sharing useful data when you have it, responding quickly to their requests, and meeting for coffee or lunch 1-2 times per year.

Metric 5: Earned Media Referral Value

What It Is

The traffic and engagement that earned media coverage drives to your digital properties.

How to Track It

In Google Analytics 4:

1. Create a custom channel group called "Earned Media" that captures referral traffic from known media domains
2. Track sessions, engagement rate, average engagement time, and conversion events from this channel
3. Segment by outlet to identify which relationships drive the most valuable traffic

Benchmarks for UK Media Referral Traffic

Based on typical UK corporate experience:

| Outlet | Typical Referral Sessions (Per Article With Link) |
|---|---|
| BBC News | 5,000 - 20,000 |
| Guardian | 2,000 - 8,000 |
| Daily Mail Online | 3,000 - 15,000 |
| FT | 500 - 3,000 (but high-value, engaged audience) |
| Times | 500 - 2,000 (paywalled, so lower volume) |
| Trade press (Citywire, etc.) | 200 - 1,500 (highly targeted) |

The FT drives less traffic than the BBC but the visitors are often more commercially valuable. Track downstream behaviour (time on site, pages per session, conversion rate) to understand which outlets drive quality traffic, not just volume.
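The per-outlet segmentation can be automated from a GA4 export. A sketch, assuming a CSV export with `session_source` and `sessions` columns (column names vary by report; adjust to match your actual export headers):

```python
import csv
from collections import defaultdict

def sessions_by_outlet(csv_path):
    """Aggregate referral sessions per media domain from a GA4 CSV export.

    Assumes columns named 'session_source' and 'sessions'; these names
    are illustrative and should match your export's headers.
    """
    totals = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["session_source"]] += int(row["sessions"])
    # Highest-traffic outlets first
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))
```

Run monthly, the output ranks outlets by referral volume; pair it with engagement rate per source to separate high-volume outlets from high-value ones.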

Putting It Together: The Monthly Report

Structure your monthly media relations report as:

1. Executive summary (3 bullet points: best result, biggest challenge, recommended action)
2. Coverage quality dashboard (total placements, average quality score, trend vs. previous period)
3. Share of voice (competitive positioning chart)
4. Top 5 placements (with quality scores and links)
5. Relationship health (priority journalist engagement summary)
6. Referral traffic (earned media contribution to web traffic and conversions)
7. Next month's plan (target outlets, planned campaigns, spokesperson availability)

This report should take no more than 2 hours to produce if your tracking systems are set up correctly. If it is taking longer, invest time in automating data collection — Looker Studio connected to GA4 and your monitoring platform export will do most of the work.

What Not to Report

  • AVE (Advertising Value Equivalent): Formally rejected by AMEC, CIPR, and PRCA. Including it signals that your measurement is not credible.
  • "Opportunities to see" / total reach: These numbers are inflated to meaninglessness. The BBC website gets 500M monthly visits — that does not mean 500M people saw your quote in one article.
  • Raw sentiment percentages from automated tools: Automated sentiment from Meltwater or Cision is 60-75% accurate. Reporting it to one decimal place implies a precision that does not exist.