PR teams face constant pressure to prove ROI the way performance marketing does — with precise, click-level attribution. That is not possible for earned media, and pretending otherwise destroys credibility with CFOs who understand data. This guide explains what PR attribution can realistically demonstrate, where the limits are, and how to present findings honestly to UK leadership teams.
What PR Attribution Can Prove
Direct Referral Traffic
When a journalist includes a link to your website in a piece of coverage, you can track the referral in Google Analytics 4. This is the most defensible PR metric available:
- Referral sessions: Exact count of visits from each media outlet
- Behaviour after referral: Pages viewed, time on site, conversion events
- Source quality: Which outlets drive engaged visitors vs. bounce traffic
A single BBC News article with a link can drive 5,000-20,000 sessions in 48 hours. A Guardian piece typically delivers 2,000-8,000. Specialist trade coverage (Citywire, Insurance Journal, PR Week) drives lower volume but often higher engagement and conversion rates.
Set up GA4 referral reports before your campaign launches, not after. Create a custom channel group for "Earned Media" that captures all known media domains.
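If your team works in Python, a minimal sketch along these lines can classify an exported GA4 referral report as earned media and rank outlets by traffic quality. The file name, column names and domain list are illustrative assumptions, not a fixed GA4 export format.

```python
# Minimal sketch (assumed column names) for classifying GA4 referral
# sessions as earned media. Assumes a CSV export of the GA4 traffic
# acquisition report with one row per session source.
import pandas as pd

# Domains you treat as earned media -- extend with every outlet you pitch.
EARNED_MEDIA_DOMAINS = {
    "bbc.co.uk", "theguardian.com", "ft.com",
    "citywire.com", "prweek.com",
}

def summarise_earned_referrals(csv_path: str) -> pd.DataFrame:
    """Return sessions, engagement and conversions per media outlet."""
    df = pd.read_csv(csv_path)  # assumed columns: source, sessions,
                                # engaged_sessions, conversions
    earned = df[df["source"].isin(EARNED_MEDIA_DOMAINS)].copy()
    earned["engagement_rate"] = earned["engaged_sessions"] / earned["sessions"]
    return earned.sort_values("sessions", ascending=False)

if __name__ == "__main__":
    print(summarise_earned_referrals("ga4_referrals_export.csv"))
```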
Share of Voice Trends
Monitoring tools like Meltwater, Signal AI, and Cision can track your brand's share of voice against named competitors over time. This is reliable as a trend indicator:
- "Our share of voice on ESG topics increased from 12% to 23% in Q3, while Competitor X dropped from 30% to 18%"
- "We were quoted in 45% of coverage about the FCA's new Consumer Duty rules, up from 20% pre-campaign"
Share of voice data is particularly useful for UK regulated industries where being seen as a credible voice on policy matters directly affects regulatory relationships.
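Share of voice itself is simple arithmetic on mention counts from your monitoring tool. A short sketch, with placeholder brands and figures, shows the quarter-on-quarter comparison used in the examples above.

```python
# Illustrative share-of-voice calculation from monitoring-tool mention
# counts (brand names and figures are placeholders, not real data).
mentions_by_quarter = {
    "Q2": {"Us": 120, "Competitor X": 300, "Competitor Y": 180},
    "Q3": {"Us": 230, "Competitor X": 180, "Competitor Y": 190},
}

def share_of_voice(counts: dict[str, int]) -> dict[str, float]:
    """Each brand's mentions as a percentage of all tracked mentions."""
    total = sum(counts.values())
    return {brand: round(100 * n / total, 1) for brand, n in counts.items()}

for quarter, counts in mentions_by_quarter.items():
    print(quarter, share_of_voice(counts))
```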
Message Pull-Through
You can objectively measure whether your key messages appeared in coverage. This requires human coding (or AI-assisted coding with human review), not automated sentiment scoring:
- Score each piece of coverage: key message present (yes/partial/no)
- Track pull-through rate as a percentage over time
- Compare pull-through across different campaigns, spokespeople, and outlet types
A well-executed campaign should achieve 60-80% message pull-through in tier 1 coverage. Below 40% means your messaging is not landing, regardless of volume.
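Once coverage is coded, the rate itself is easy to compute. A small sketch follows; the half-weight for "partial" is a convention you can adjust, and the sample coding is illustrative only.

```python
# Pull-through scoring sketch: each coded article records whether the key
# message was present ("yes"), partially present ("partial"), or absent ("no").
WEIGHTS = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def pull_through_rate(coded_articles: list[str]) -> float:
    """Weighted pull-through rate as a percentage (partial counts half)."""
    if not coded_articles:
        return 0.0
    score = sum(WEIGHTS[c] for c in coded_articles)
    return round(100 * score / len(coded_articles), 1)

tier1_coverage = ["yes", "yes", "partial", "no", "yes", "partial"]
print(pull_through_rate(tier1_coverage))  # prints 66.7
```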
What PR Attribution Cannot Prove
Causation of Business Outcomes
This is the hard truth that PR teams need to internalise: you cannot prove that coverage caused a sale, a policy change, or a share price movement. You can show correlation and contribution, but not causation. Here is why:
- Multi-touch problem: A prospect who reads about you in the FT may also have seen your LinkedIn ad, received a sales email, and heard about you from a colleague. Attributing the conversion to PR alone is dishonest.
- Timing fallacy: Coverage appeared in Week 3, leads spiked in Week 4. But marketing also ran a webinar in Week 3 and the sales team launched a new outbound campaign. The timing correlation is real; the causal claim is not.
- Unmeasurable influence: A board member reads a positive profile of your CEO in the Sunday Times. Six months later, they recommend your firm for a contract. That influence is real but impossible to attribute.
Precise ROI Calculations
"We spent 50k on PR and generated 200k in revenue" — this statement requires a level of attribution precision that earned media simply does not support. Unlike paid media, where you can track a click from ad to conversion, earned media influence is diffuse, delayed, and entangled with other channels.
Sentiment Accuracy
Automated sentiment analysis from monitoring tools is unreliable at the individual article level. Meltwater, Cision, and Brandwatch all acknowledge accuracy rates of 60-75% for automated sentiment in English-language media. That means somewhere between 1 in 4 and 2 in 5 articles is miscategorised. Reporting sentiment to one decimal place ("sentiment improved from 72.3% to 74.1%") implies a precision that does not exist.
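A quick simulation makes the point concrete: even when true sentiment never changes, a classifier that is right roughly 70% of the time will report positive shares that drift by several points from period to period. The accuracy figure, true share and article count below are assumptions for illustration, not vendor data.

```python
# Illustrative simulation: per-article misclassification noise swamps a
# 1.8-point "improvement" in reported sentiment.
import random

def reported_positive_share(true_share=0.73, accuracy=0.70, n_articles=150):
    """Observed % positive when each article is labelled correctly with
    probability `accuracy` and flipped otherwise."""
    positives = 0
    for _ in range(n_articles):
        truly_positive = random.random() < true_share
        correct = random.random() < accuracy
        positives += truly_positive if correct else not truly_positive
    return 100 * positives / n_articles

samples = [reported_positive_share() for _ in range(1000)]
print(f"min {min(samples):.1f}%, max {max(samples):.1f}%")
# The spread across runs is several percentage points with no real change.
```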
How to Present Attribution Honestly
Use "Contribution" Language
Replace "PR generated" with "PR contributed to." Replace "PR drove" with "PR was a factor in." This is not weakness — it is credibility.
Example: "Earned media coverage in the FT, Guardian, and BBC during Q3 coincided with a 35% increase in branded search volume. While we cannot isolate PR's contribution from other marketing activity, the timing correlation and the fact that 60% of new site visitors in this period arrived via media referrals suggest significant PR contribution."
Build a Contribution Dashboard
Combine these metrics on a single page:
| Metric | Source | What It Shows |
|---|---|---|
| Referral traffic from media | GA4 | Direct, trackable visits |
| Branded search volume trend | Google Search Console / Trends | Awareness proxy |
| Share of voice (quality-weighted) | Meltwater / Signal AI | Competitive positioning |
| Message pull-through rate | Manual coding | Narrative control |
| Inbound enquiry volume (with "how did you hear about us" data) | CRM | Self-reported attribution |
| Social amplification of coverage | Brandwatch / Pulsar | Reach extension |
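If the dashboard lives in a notebook or spreadsheet rather than a BI tool, a sketch like this pulls the quarter-on-quarter picture together. Every figure is a placeholder you would replace with values from GA4, Search Console, your monitoring tool and your CRM.

```python
# Single-page contribution dashboard sketch (all values are placeholders).
import pandas as pd

dashboard = pd.DataFrame([
    {"metric": "Media referral sessions",        "source": "GA4",                    "this_quarter": 14200, "last_quarter": 9100},
    {"metric": "Branded search volume (index)",  "source": "Search Console / Trends", "this_quarter": 135,   "last_quarter": 100},
    {"metric": "Share of voice (%)",             "source": "Meltwater / Signal AI",   "this_quarter": 23,    "last_quarter": 12},
    {"metric": "Message pull-through (%)",       "source": "Manual coding",           "this_quarter": 68,    "last_quarter": 55},
    {"metric": "Inbound enquiries citing media", "source": "CRM",                     "this_quarter": 31,    "last_quarter": 18},
])
dashboard["change_%"] = (
    100 * (dashboard["this_quarter"] - dashboard["last_quarter"])
    / dashboard["last_quarter"]
).round(1)
print(dashboard.to_string(index=False))
```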
Common Mistake: The AVE Slide
A UK professional services firm included an AVE calculation in its board pack: "This quarter's coverage was worth £1.2M in equivalent advertising." The CFO — who understood media buying — pointed out that nobody would actually buy full-page ads in the specific publications covered, that the "rate card" values used were fictional, and that the calculation implied editorial coverage had the same impact as advertising. The comms director lost credibility on measurement for the rest of their tenure. AVE has been formally rejected by AMEC, the CIPR, and the PRCA. Do not use it, even if your monitoring tool still offers it as a feature.
The Stakeholder Feedback Loop
For influence that cannot be tracked digitally — investor perception, regulatory relationships, policy impact — use structured qualitative feedback:
- Run a quarterly 10-minute survey of 15-20 key stakeholders asking about awareness, perception, and information sources
- Include "I saw [company name] in media coverage recently" as a yes/no question with a follow-up: "Did this affect your perception of the company?"
- Track responses over time — trend data from 4+ quarters is more persuasive than a single snapshot
Kantar and Ipsos both offer perception tracking services for larger budgets. For smaller teams, a well-designed SurveyMonkey or Qualtrics survey sent to a curated stakeholder list works.
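Trend reporting on the survey is straightforward once responses are tallied. The quarterly figures here are placeholders; the point is to plot the same question over at least four quarters.

```python
# Trend sketch for the stakeholder survey: share of respondents answering
# "yes" to "I saw [company name] in media coverage recently", per quarter.
survey_results = {
    "Q1": {"yes": 4, "no": 14},
    "Q2": {"yes": 6, "no": 12},
    "Q3": {"yes": 9, "no": 9},
    "Q4": {"yes": 11, "no": 7},
}

for quarter, r in survey_results.items():
    rate = 100 * r["yes"] / (r["yes"] + r["no"])
    print(f"{quarter}: {rate:.0f}% recalled recent coverage")
```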
Attribution Maturity by Team Size
| Team Size | Realistic Attribution Level |
|---|---|
| 1-2 person team | Referral traffic + message pull-through + share of voice trend |
| 3-5 person team | Add branded search correlation + stakeholder survey + CRM integration |
| 6+ person team or agency | Add multi-touch modelling + predictive analytics + integrated marketing dashboard |
Do not build attribution infrastructure you cannot maintain. A small team tracking three metrics consistently is more credible than a large team with a complex dashboard that is updated sporadically.
The Bottom Line
Honest attribution builds more trust than inflated claims. A CFO who sees you acknowledge limitations will take your contribution claims more seriously. A CFO who catches you overstating impact will discount everything you report — including the metrics you can legitimately prove.