
PR measurement has improved significantly since AMEC's Barcelona Principles killed AVE (advertising value equivalency) as a credible metric. But most UK comms teams still struggle with the practical question: what does the monthly or quarterly measurement report actually look like? This template gives you a structure that connects coverage to business outcomes, acknowledges what can and cannot be proven, and fits into a leadership meeting without needing a 20-minute explainer.

Report structure: five sections, four pages maximum

Section 1: Outcomes summary (half page)

Open with the business outcomes your PR work supports. Not "we got 47 pieces of coverage" -- that is an output, not an outcome.

Example outcomes framing:

> Trust and licence to operate: Coverage of our regulatory compliance programme appeared in the FT, Insurance Times, and BBC Radio 4 (Today programme), reinforcing our positioning as a well-governed operator ahead of the FCA's spring review cycle.
>
> Demand and pipeline support: The product launch announcement generated 23 Tier 1 and Tier 2 coverage pieces including a Guardian feature. Website traffic from media referrals increased 34% week-on-week, with 187 tracked visits from FT.com alone.
>
> Risk reduction: No negative Tier 1 coverage this quarter. The monitoring programme detected and contained two emerging issues (trade press inquiry on pricing, social complaint thread on service delays) before they reached national media.

Each outcome should be one paragraph. State what happened, where it appeared, and what effect it had on a business metric. If you cannot connect to a business metric, state the coverage outcome and be honest about the attribution gap.

Section 2: KPI panel (half page)

A compact panel of 6-8 metrics with trend indicators. Use a table or a dashboard screenshot.

| KPI | This period | Previous period | Trend | Target |
|-----|-------------|-----------------|-------|--------|
| Coverage volume (total) | 127 | 98 | +30% | 100+ |
| Tier 1 coverage pieces | 14 | 11 | +27% | 12+ |
| Quality-weighted SOV | 32% | 29% | +3pp | 30%+ |
| Sentiment (% positive) | 61% | 58% | +3pp | 55%+ |
| Message pull-through | 44% | 39% | +5pp | 40%+ |
| Broadcast mentions | 6 | 4 | +2 | 4+ |
| Website traffic from media | 1,247 | 931 | +34% | 1,000+ |
| Share of search (branded) | 18% | 17% | +1pp | Track |
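
If you assemble the trend column by hand each period, a small script keeps the arithmetic consistent and avoids mixing up percentage change with percentage-point change. A minimal sketch in Python; the metric names, the count/percentage split, and the formatting choices are illustrative assumptions rather than a fixed standard:

```python
# Minimal sketch: compute the KPI panel's trend column.
# Count-style metrics get a percentage change; metrics that are
# already percentages get a percentage-point (pp) change.

COUNT_METRICS = {"Coverage volume (total)", "Tier 1 coverage pieces",
                 "Broadcast mentions", "Website traffic from media"}

def trend(kpi: str, current: float, previous: float) -> str:
    if kpi in COUNT_METRICS:
        if kpi == "Broadcast mentions":  # small counts read better as a raw delta
            return f"{current - previous:+.0f}"
        change = (current - previous) / previous * 100
        return f"{change:+.0f}%"
    # Percentage metrics (SOV, sentiment, pull-through, share of search)
    return f"{current - previous:+.0f}pp"

print(trend("Coverage volume (total)", 127, 98))  # +30%
print(trend("Quality-weighted SOV", 32, 29))      # +3pp
print(trend("Broadcast mentions", 6, 4))          # +2
```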

Notes on each KPI:

  • Coverage volume: Total articles, segments, and transcripts. Include broadcast.
  • Tier 1 coverage: FT, Guardian, Times, Telegraph, BBC, Sky News, Channel 4 News. This is the quality signal.
  • Quality-weighted SOV: Your mentions weighted by outlet tier, divided by the weighted total for your competitor set. More meaningful than raw SOV; a worked sketch follows this list.
  • Sentiment: Automated (Meltwater, Signal AI) with manual correction for Tier 1 articles. State the methodology.
  • Message pull-through: Percentage of coverage that includes at least one of your 3-5 key messages. Requires manual coding for accuracy -- automated tools miss nuance here.
  • Broadcast mentions: Total mentions across your priority programme list. Include programme names in the appendix.
  • Website traffic from media: Use UTM parameters or Google Analytics referral data to track visits from media coverage URLs.
  • Share of search: Google Trends data for your brand name vs competitors. A lagging indicator, but useful for showing sustained awareness impact.
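
To make the quality-weighted SOV calculation concrete, here is a minimal sketch. The tier weights (3/2/1), brand names, and coverage counts are illustrative assumptions, not a standard; use whatever weighting your team has agreed and keep it constant between periods so the trend stays comparable.

```python
# Minimal sketch: quality-weighted share of voice.
# Each brand's mentions are weighted by outlet tier, then your weighted
# total is divided by the weighted total across the whole competitor set.
# Tier weights and counts below are illustrative assumptions.

TIER_WEIGHTS = {1: 3, 2: 2, 3: 1}  # e.g. a Tier 1 piece counts three times a Tier 3 piece

# mentions[brand][tier] = number of pieces in that tier this period
mentions = {
    "Us":           {1: 14, 2: 46, 3: 67},
    "Competitor A": {1: 18, 2: 60, 3: 90},
    "Competitor B": {1: 9,  2: 35, 3: 66},
}

def weighted_volume(by_tier: dict[int, int]) -> int:
    return sum(TIER_WEIGHTS[tier] * count for tier, count in by_tier.items())

total = sum(weighted_volume(v) for v in mentions.values())
for brand, by_tier in mentions.items():
    sov = weighted_volume(by_tier) / total * 100
    print(f"{brand}: {sov:.0f}% quality-weighted SOV")
```

Because the denominator covers the whole competitor set, the percentages across brands sum to 100, which makes the figure easy to sanity-check in the meeting.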

Section 3: Narrative analysis (one page)

Two to three paragraphs covering:

What changed and why: Explain the numbers. A 30% increase in coverage is meaningless without context. Was it driven by a planned announcement, a reactive issue, or external market conditions? Which outlets and journalists drove the volume?

What the competition did: If your SOV shifted, explain the competitive context. Did a competitor launch a product, face a crisis, or run a campaign? SOV is relative -- your performance only means something in the context of what others did.

What to do next: Specific, actionable recommendations. "Continue the current briefing programme with FT and Times business desks" is useful. "Maintain our strong performance" is not.

Section 4: Coverage highlights and lowlights (half page)

Top 3 positive coverage pieces:

For each: outlet, headline, journalist, reach, and one sentence on why it matters.

Financial Times -- "UK insurers lead European peers on climate disclosure" -- [Journalist name] -- 830k daily digital readers. Our CEO was the lead quoted source, positioning us as a sector leader ahead of the FCA's sustainability disclosure rules.

Top 1-2 negative or risk coverage pieces:

Same format. Include what action was taken.

Guardian -- "Consumer groups question insurance pricing models" -- [Journalist name] -- 24.8m monthly unique visitors. We were mentioned as one of four insurers. No direct allegation, but the framing was negative. We issued a factual correction via the press office; the journalist updated the online article to include our response within 3 hours.

Do not hide negative coverage. If leadership discovers it independently, your credibility is damaged far more than if you surface it proactively with context.

Section 5: Methodology and assumptions (half page or appendix)

State clearly:

  • Source: Meltwater / Cision / Signal AI -- specify the platform.
  • Source set: Number of UK outlets monitored, broken down by type (national, trade, broadcast, online-only).
  • Sentiment methodology: Automated NLP with manual correction for all Tier 1 articles. Agreement rate between automated and manual: [X]% (a short calculation sketch follows this list).
  • SOV competitor set: List the competitors and confirm the set has not changed since last period.
  • Message pull-through methodology: Manual coding against a defined message framework of [3-5] key messages. Coding by [name/team].
  • Attribution caveats: "Website traffic correlation does not imply causation. Media coverage is one of multiple inputs to brand awareness and should not be treated as the sole driver of web traffic or search volume changes."
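
The agreement-rate calculation itself is simple, but it is worth standardising so the figure is produced the same way every period. A minimal sketch, assuming you can export the automated label alongside the analyst's corrected label for each Tier 1 article; the sample data is invented:

```python
# Minimal sketch: agreement rate between automated and manually corrected
# sentiment labels for Tier 1 articles. The sample data is illustrative.

tier1_articles = [
    {"auto": "positive", "manual": "positive"},
    {"auto": "neutral",  "manual": "positive"},
    {"auto": "negative", "manual": "negative"},
    {"auto": "positive", "manual": "neutral"},
    {"auto": "positive", "manual": "positive"},
]

matches = sum(1 for a in tier1_articles if a["auto"] == a["manual"])
agreement = matches / len(tier1_articles) * 100
print(f"Automated vs manual agreement: {agreement:.0f}% "
      f"({matches} of {len(tier1_articles)} Tier 1 articles)")
```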

This section protects your credibility. An executive who sees transparent methodology trusts the report. An executive who suspects cherry-picking will challenge the numbers in the meeting.

Reporting cadence

| Report type | Cadence | Audience | Length |
|-------------|---------|----------|--------|
| Daily brief | Every working day | Comms team, head of comms, CEO (optional) | 1 page |
| Weekly summary | Friday | Comms team, marketing | 1 page |
| Monthly measurement report | First week of month | Head of comms, CMO, CEO | 3-4 pages (this template) |
| Quarterly strategic review | End of quarter | Board/ExCo, investor relations | 4-6 pages with appendix |

The monthly report is the backbone. The daily brief and weekly summary feed into it. The quarterly review adds strategic context and longer-term trend analysis.

Common mistake: claiming too much attribution

A UK fintech reported to its board that PR coverage "generated GBP 2.1m in pipeline value" based on the fact that 12 inbound leads mentioned seeing media coverage. The board loved the number. The CFO asked how it was calculated. The PR team had multiplied average deal size by the number of leads who mentioned media -- with no control group, no multi-touch attribution, and no accounting for the concurrent paid advertising campaign.

The number was withdrawn in the next board meeting. The PR team's credibility took six months to recover.

Report what you can prove: coverage appeared, website traffic correlated, and leads mentioned media as one source. Do not manufacture an ROI figure unless your attribution model is robust enough to withstand CFO scrutiny. Honest reporting with acknowledged limitations is more credible than inflated claims.
