Most UK comms teams are stuck at Level 1 — counting clips and mentions — while telling leadership they measure "impact." This model gives you an honest assessment of where you are and a practical path to the next level. It is based on the AMEC Integrated Evaluation Framework and adapted for how UK corporate and agency teams actually operate.
Level 1: Output Counting
What It Looks Like
- Monthly reports listing coverage volume, mentions, and reach
- AVE (advertising value equivalent) still appears somewhere, possibly relabelled as "media value"
- Sentiment is reported as a percentage but nobody can explain the methodology
- The team uses Meltwater, Cision, or Signal AI primarily as a clipping service
- Reports are produced manually in PowerPoint, taking 2-3 days per month
What It Tells Leadership
Volume. How much coverage did we get? How many journalists mentioned us? The answer to "so what?" is always "more than last month" or "less than last month" with no explanation of why it matters.
How to Know You Are Here
If your monthly report could be produced by someone who has never read the coverage — just counted it — you are at Level 1.
What to Do Next
Pick one quality metric and add it to every report. The easiest starting point: message pull-through. For each piece of coverage, score whether your key message appeared (yes/no/partial). This takes 15 minutes per report cycle and immediately makes the data more useful.
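The pull-through tally is simple enough to sketch in a few lines. This is an illustrative example only: the outlets and scores are invented, and the choice to count a partial appearance as half a point is an assumption, not a standard.

```python
# Message pull-through tally over a hand-scored set of clips.
# Outlets and scores below are illustrative, not real data.
clips = [
    {"outlet": "Financial Times", "message": "yes"},
    {"outlet": "Trade Weekly", "message": "yes"},
    {"outlet": "Regional Daily", "message": "partial"},
    {"outlet": "Industry Blog", "message": "no"},
    {"outlet": "Sector Monthly", "message": "yes"},
]

# Assumption: count a full appearance as 1 and a partial as 0.5.
weights = {"yes": 1.0, "partial": 0.5, "no": 0.0}
score = sum(weights[c["message"]] for c in clips) / len(clips)
print(f"Message pull-through: {score:.0%}")  # → Message pull-through: 70%
```

Because each clip is a yes/no/partial judgment, the scoring stays quick even when done by hand in a spreadsheet; the code just formalises the arithmetic.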
Level 2: Quality and Outcome Measurement
What It Looks Like
- Coverage is scored by quality, not just counted — typical dimensions include message accuracy, spokesperson inclusion, outlet tier, and tone
- Share of voice is tracked against 3-5 named competitors, with quality weighting
- Sentiment analysis is done by human analysts (or AI with human review), not just automated keyword matching
- Reports include trend analysis over 6-12 months, not just month-on-month snapshots
- The team has a consistent tagging taxonomy for topics, campaigns, and spokespeople
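Quality-weighted share of voice follows directly from per-clip quality scores: instead of counting clips per brand, you sum their quality scores and compare shares. A minimal sketch, with made-up brand names and scores:

```python
# Quality-weighted share of voice against named competitors.
# Brand names and per-clip quality scores (0-1) are illustrative.
coverage = {
    "Us":           [0.9, 0.7, 0.8],
    "Competitor A": [0.5, 0.6],
    "Competitor B": [0.8, 0.8, 0.4, 0.6],
}

totals = {brand: sum(scores) for brand, scores in coverage.items()}
grand_total = sum(totals.values())
sov = {brand: total / grand_total for brand, total in totals.items()}
for brand, share in sov.items():
    print(f"{brand}: {share:.0%} quality-weighted SoV")
```

Note how the weighting changes the story: Competitor B has the most clips, but a brand with fewer, higher-quality pieces can still lead on share.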
What It Tells Leadership
Whether the coverage is saying what you want it to say, in the outlets that matter, relative to competitors. This is where comms reporting starts to answer strategic questions: "Are we winning the narrative on X?" "Is competitor Y getting better coverage than us on Z?"
Tools at This Level
- Meltwater or Cision with custom dashboards and Boolean queries
- Signal AI for narrative tracking and AI-assisted sentiment
- Brandwatch or Pulsar for social media analytics alongside earned media
- A simple BI tool (Looker Studio, Power BI, or Tableau) pulling data from your monitoring platform via API or export
Common Mistake: Fake Level 2
A FTSE 250 financial services firm told its board it had "outcome-based measurement." In practice, the team ran Meltwater's auto-sentiment on every clip and reported the percentage as if it were a validated metric. When the CFO asked why sentiment was 78% positive during a quarter when the FCA issued a public censure, nobody could explain. The team had automated output counting and called it outcomes. Real Level 2 requires human judgment on quality, even if AI assists with the initial sort.
Level 3: Business Impact Connection
What It Looks Like
- Coverage quality metrics are correlated with business KPIs: branded search volume, website traffic from earned media, inbound lead enquiries, recruitment application rates, or share price movement (for listed companies)
- Attribution models are documented with explicit assumptions — "we attribute 15% of the branded search uplift to earned media based on timing correlation and campaign isolation"
- Qualitative evidence is systematically collected: stakeholder interviews, sales team feedback, investor perception surveys
- Reporting is integrated with marketing and digital analytics, not siloed in a comms-only dashboard
- The comms team presents data to the board alongside marketing, not as a separate "PR report"
What It Tells Leadership
What communications contributed to business outcomes. Not "caused" — contributed. The distinction matters. Level 3 teams are explicit about the limitations of attribution (correlation, not causation; influence, not sole credit) and build trust by being honest about what the data can and cannot prove.
The Data Pipeline
At Level 3, you need data flowing from multiple systems:
- Media monitoring platform (Meltwater, Signal AI) for coverage data
- Google Analytics 4 for website traffic and referral attribution
- Google Search Console for branded search trends
- CRM (Salesforce, HubSpot) for lead source tracking
- Social listening (Brandwatch, Pulsar) for conversation data
- HR systems for recruitment and retention correlation
- Investor relations for perception survey data (if listed)
This typically requires either API integrations or a regular export-and-merge process in a BI tool. Budget 3-6 months to build this pipeline properly.
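Before any API work, the export-and-merge version of this pipeline is just a join on month across systems. The sketch below uses invented monthly figures standing in for a monitoring export, Google Search Console data, and CRM lead counts:

```python
# Illustrative export-and-merge step: join monthly figures from three
# systems on a shared month key. All numbers are made up for the sketch.
monitoring = {"2024-01": 42, "2024-02": 57, "2024-03": 38}       # clip counts
search = {"2024-01": 1200, "2024-02": 1550, "2024-03": 1100}     # branded searches
leads = {"2024-01": 18, "2024-02": 26, "2024-03": 15}            # CRM enquiries

merged = [
    {"month": m, "clips": monitoring[m], "searches": search[m], "leads": leads[m]}
    for m in sorted(monitoring)
]
for row in merged:
    print(row)
```

In practice each dict would be a CSV export loaded into your BI tool, and the join key (month, or ideally week) needs to be consistent across every system before correlation work is meaningful.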
Level 4: Predictive and Prescriptive Analytics
What It Looks Like
- The team uses historical data to forecast the likely impact of planned campaigns
- Media monitoring feeds into real-time alerting that triggers pre-defined response protocols
- AI models identify emerging narratives before they reach mainstream coverage
- Scenario modelling helps leadership understand the reputational impact of strategic decisions before they are made
- Comms analytics informs resource allocation: budget moves toward what the data shows works
What It Tells Leadership
What is likely to happen and what to do about it. This is rare. Fewer than 5% of UK comms teams operate at this level, and those that do are typically in regulated industries (financial services, energy, pharma) where the cost of reputational failure justifies the investment.
What It Requires
- A dedicated analytics function (at least one full-time analyst)
- 18+ months of clean, consistently tagged historical data
- Signal AI, Brandwatch, or a custom data science capability
- Executive sponsorship and a culture that treats comms data as seriously as financial data
How to Progress: Practical Steps
From Level 1 to Level 2 (3-6 months)
1. Define a coverage quality scorecard with 4-5 dimensions (message pull-through, outlet tier, spokesperson inclusion, tone, link/call-to-action presence)
2. Score every piece of coverage for one quarter manually to establish a baseline
3. Set up competitor share of voice tracking for your top 3 competitors
4. Replace your monthly volume report with a quality-weighted dashboard
5. Kill AVE. Remove it from every report. If leadership asks for it, explain the AMEC position and offer quality scores instead
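The scorecard in step 1 reduces to a weighted sum per clip. The weights below are illustrative assumptions (you would set your own), and the sample clip is invented:

```python
# Minimal coverage quality scorecard over the dimensions named above.
# Weights and the sample clip are illustrative choices, not a standard.
WEIGHTS = {
    "message_pull_through": 0.35,  # yes=1.0, partial=0.5, no=0.0
    "outlet_tier": 0.25,           # tier 1=1.0, tier 2=0.6, tier 3=0.3
    "spokesperson": 0.15,          # quoted=1.0, not quoted=0.0
    "tone": 0.15,                  # positive=1.0, neutral=0.5, negative=0.0
    "link_or_cta": 0.10,           # present=1.0, absent=0.0
}

# A tier-2 piece with full message pull-through, a quote, neutral tone, no link.
clip = {"message_pull_through": 1.0, "outlet_tier": 0.6,
        "spokesperson": 1.0, "tone": 0.5, "link_or_cta": 0.0}

score = sum(WEIGHTS[d] * clip[d] for d in WEIGHTS)
print(f"Quality score: {score:.2f} out of 1.00")
```

Keeping the weights explicit and documented is what makes the dashboard defensible when leadership asks how a number was produced.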
From Level 2 to Level 3 (6-12 months)
1. Build a branded search correlation analysis — plot your coverage spikes against Google Trends data for your brand name
2. Integrate Google Analytics referral data into your comms dashboard
3. Run a quarterly stakeholder perception survey (even 5 questions to 20 stakeholders is useful)
4. Document your attribution methodology and share it with the CFO/CMO before presenting results
5. Align your reporting cadence with the marketing team — same meeting, same format, combined dashboard
From Level 3 to Level 4 (12-24 months)
1. Hire or develop an analytics specialist within the comms function
2. Build 18 months of historical data with consistent tagging
3. Start with simple forecasting: "based on previous campaign performance, we expect X coverage quality from Y investment"
4. Pilot real-time narrative monitoring with Signal AI or Brandwatch alerts
5. Present a scenario model for one upcoming strategic decision
The Honest Assessment
Most UK comms teams should aim for solid Level 2 with elements of Level 3. That is enough to demonstrate value, inform decisions, and justify budget. Leaping to Level 4 without the foundations of Level 2 produces expensive dashboards that nobody trusts. Build the basics first.