The value of media monitoring collapses when people stop trusting the data. It only takes one missed FT article or one week of noisy alerts for an executive to dismiss the daily brief as unreliable. A QA routine -- 30 minutes per week plus a quarterly deep review -- prevents that trust erosion and keeps your monitoring output tight.
QA metric 1: Coverage completeness
Coverage completeness answers the question: are we catching everything we should be?
The missed-coverage log
Create a shared spreadsheet (Google Sheet or SharePoint) with these columns:
| Date | Outlet | Headline | How discovered | Why missed | Fix applied |
|------|--------|----------|----------------|------------|-------------|
| 7 Jan | Insurance Times | "FCA warns on claims delays" | Spotted by comms manager on Twitter | Trade title not in Meltwater source list | Requested source addition, confirmed 12 Jan |
| 14 Jan | BBC Radio 4 Today | CEO interview mention | CEO's EA flagged it | Broadcast monitoring only active during crisis periods | Added Today to always-on transcript list |
Every team member should know this log exists and be able to add entries. Review it weekly during the Monday morning check-in.
Benchmark: If you are logging more than 3 missed items per month from Tier 1 and Tier 2 outlets, your source list or queries need immediate attention. Zero missed items from the FT, Guardian, Times, Telegraph, BBC, and Sky News is the minimum standard. Missing one of those is a system failure, not a minor gap.
Proactive coverage checks
Do not rely only on the log. Once a week, run a manual check:
1. Search Google News for your brand name and top issue terms. Compare the first 20 results against what your platform captured.
2. Check the FT, Guardian, and BBC websites directly for your brand name. If anything appears that is not in your monitoring dashboard, investigate why.
3. For broadcast, check the BBC Sounds and iPlayer listings for your priority shows. Cross-reference any brand-relevant segments against your transcript feed.
This takes 15 minutes and catches gaps that the passive log misses.
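If your platform offers a CSV export, step 1 of the cross-reference can be partly automated. A minimal sketch, assuming you have two headline lists in hand; the normalisation rules and function names are illustrative, not a vendor feature:

```python
def normalise(headline: str) -> str:
    """Lowercase and strip punctuation so near-identical headlines match."""
    return "".join(ch for ch in headline.lower() if ch.isalnum() or ch.isspace()).strip()

def find_gaps(manual_results: list[str], platform_results: list[str]) -> list[str]:
    """Return headlines found in the manual search but absent from the platform export."""
    captured = {normalise(h) for h in platform_results}
    return [h for h in manual_results if normalise(h) not in captured]

# Illustrative data: one story the platform caught, one it missed.
gaps = find_gaps(
    ["FCA warns on claims delays", "Insurer posts record profit"],
    ["Insurer posts record profit!"],
)
print(gaps)  # ['FCA warns on claims delays']
```

Anything the function returns goes straight into the missed-coverage log with "Why missed" to be investigated.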
QA metric 2: Noise (false positive rate)
Noise means irrelevant results appearing in your feed. Every false positive wastes analyst time and dilutes reporting accuracy.
Measuring noise
Once a week, pull a random sample of 50 results from your primary brand query. Classify each as:
- Relevant: Substantive mention of your brand in the context you care about.
- Marginal: Your brand is mentioned but in passing, or the context is tangential. (E.g., your company listed in a generic industry roundup.)
- Irrelevant: The result has nothing to do with your brand. (Common cause: brand name collision. "Shell" as a company vs "shell" as an object.)
Noise rate = Irrelevant / Total sample.
| Noise rate | Assessment | Action |
|------------|------------|--------|
| Under 10% | Good | Maintain current queries |
| 10-20% | Acceptable but monitor | Review exclusion list monthly |
| 20-35% | Problematic | Refine Boolean queries this week |
| Over 35% | Broken | Stop reporting from this query until fixed |
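The weekly calculation is simple enough to script. A sketch that mirrors the classification labels and thresholds above; the sample data is made up:

```python
def noise_rate(sample: list[str]) -> float:
    """Fraction of the classified sample that is irrelevant."""
    return sample.count("irrelevant") / len(sample)

def assess(rate: float) -> str:
    """Map a noise rate to the action bands in the table above."""
    if rate < 0.10:
        return "Good: maintain current queries"
    if rate <= 0.20:
        return "Acceptable: review exclusion list monthly"
    if rate <= 0.35:
        return "Problematic: refine Boolean queries this week"
    return "Broken: stop reporting from this query until fixed"

# Illustrative 50-result sample: 38 relevant, 5 marginal, 7 irrelevant.
sample = ["relevant"] * 38 + ["marginal"] * 5 + ["irrelevant"] * 7
rate = noise_rate(sample)
print(f"{rate:.0%} -> {assess(rate)}")  # 14% -> Acceptable: review exclusion list monthly
```

Note that marginal results do not count toward noise; track them separately if they start crowding the feed.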
Common noise sources and fixes
- Brand name collision: Add exclusion terms. If monitoring "Sage" (the software company), exclude "sage advice," "sage green," "sage herb." In Meltwater or Cision, use source-type filters to exclude recipe sites and lifestyle blogs.
- Irrelevant geography: If you only operate in the UK, add geographic filters or exclude non-UK outlet categories.
- Aggregator duplicates: PA Media wire stories get republished across dozens of regional sites. A single story can appear as 40 separate results. Configure de-duplication in your platform (Meltwater and Signal AI both support this) or filter to original source only.
- Archived content resurfacing: Some outlets re-index old articles when they update their websites. Use date filters to exclude content older than 48 hours from alert feeds.
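If your platform's de-duplication falls short, a crude headline-based pass over an export can collapse syndicated wire copies before reporting. A hypothetical sketch; the record fields and keep-the-earliest rule are assumptions, not how any vendor implements it:

```python
from datetime import datetime

def dedupe(results: list[dict]) -> list[dict]:
    """Keep one result per normalised headline: the earliest-published copy,
    which for wire stories is usually the original source."""
    seen: dict[str, dict] = {}
    for r in sorted(results, key=lambda r: r["published"]):
        key = "".join(ch for ch in r["headline"].lower() if ch.isalnum())
        seen.setdefault(key, r)  # first (earliest) copy wins
    return list(seen.values())

# Illustrative data: a PA Media story republished by a regional site.
results = [
    {"headline": "FCA warns on claims delays", "outlet": "Yorkshire Post",
     "published": datetime(2025, 1, 7, 9, 30)},
    {"headline": "FCA warns on claims delays", "outlet": "PA Media",
     "published": datetime(2025, 1, 7, 8, 1)},
]
print([r["outlet"] for r in dedupe(results)])  # ['PA Media']
```

Exact headline matching misses copies where a sub-editor reworded the headline, so treat this as a supplement to the platform's own de-duplication, not a replacement.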
QA metric 3: Tagging accuracy
Tags drive every downstream report -- share of voice, sentiment trends, message pull-through, issue tracking. If tags are wrong, reports are wrong.
Weekly spot-check
Pull 30 articles from the previous week. For each, verify:
- Topic tag: Is the article tagged to the correct theme in your taxonomy?
- Sentiment: Does the automated sentiment score match a human assessment? (A disagreement on a positive/negative call counts as an error; disagreements involving neutral are marginal.)
- Outlet tier: Is the outlet correctly classified as Tier 1, 2, or 3?
- Spokesperson tag: If a company spokesperson is quoted, is their name tagged?
Accuracy target: 85%+ across all four dimensions. If topic tagging drops below 80%, review your automated classification rules. If sentiment accuracy drops below 75%, consider switching to human-validated sentiment for Tier 1 coverage.
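The spot-check tally can live in a short script rather than a second spreadsheet. A sketch under an assumed record shape: one dict per audited article, `True` where the auditor agreed with the tag:

```python
DIMENSIONS = ["topic", "sentiment", "tier", "spokesperson"]
TARGET = 0.85  # the 85% accuracy target above

def accuracy(audits: list[dict]) -> dict[str, float]:
    """Per-dimension agreement rate across the audited sample."""
    return {d: sum(a[d] for a in audits) / len(audits) for d in DIMENSIONS}

# Illustrative three-article audit (a real sample would be 30 articles).
audits = [
    {"topic": True, "sentiment": True, "tier": True, "spokesperson": True},
    {"topic": True, "sentiment": False, "tier": True, "spokesperson": True},
    {"topic": False, "sentiment": True, "tier": True, "spokesperson": True},
]
for dim, score in accuracy(audits).items():
    flag = "OK" if score >= TARGET else "REVIEW"
    print(f"{dim}: {score:.0%} {flag}")
```

Keep the per-dimension scores over time; a single week below target is noise, but three consecutive weeks is the signal to review classification rules.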
Who does the spot-check
The person who does the tagging should NOT be the person who audits it. If you are a one-person team, alternate between doing the tagging during the week and auditing a random sample on Friday. If you have two or more analysts, cross-check each other's work.
QA metric 4: Alert reliability
Alerts are the time-critical output of monitoring. A missed alert on a negative BBC story has more operational impact than a missed tag on a trade article.
Alert audit (monthly)
For one week, log every alert that fires:
| Time sent | Trigger | Outlet | Relevant? | Response time |
|-----------|---------|--------|-----------|---------------|
| 08:14 | Brand + BBC | BBC News online | Yes | 12 min |
| 09:32 | Brand + FCA | Citywire | Yes | 45 min |
| 11:47 | Brand name | Recipe blog | No (name collision) | N/A |
Review the log at month end. Calculate:
- Alert precision: % of alerts that were relevant. Target: 80%+.
- Alert speed: Median time from article publication to alert delivery. Target: under 30 minutes for national outlets.
- Response time: Median time from alert delivery to analyst acknowledgement. Target: under 30 minutes during business hours.
If alert precision drops below 70%, your alert queries need tightening. If alert speed exceeds 1 hour for Tier 1 outlets, raise a support ticket with your vendor.
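The month-end calculations can be scripted from the alert log. A sketch with an assumed record shape; times are minutes from publication to alert (`speed_min`) and from alert to acknowledgement (`response_min`):

```python
from statistics import median

def alert_metrics(alerts: list[dict]) -> dict[str, float]:
    """Alert precision plus median speed and response time, per the targets above."""
    relevant = [a for a in alerts if a["relevant"]]
    return {
        "precision": len(relevant) / len(alerts),
        "median_speed_min": median(a["speed_min"] for a in alerts),
        # Response time only makes sense for relevant alerts.
        "median_response_min": median(a["response_min"] for a in relevant),
    }

# Illustrative week of alerts: two relevant, one name-collision false positive.
alerts = [
    {"relevant": True, "speed_min": 9, "response_min": 12},
    {"relevant": True, "speed_min": 22, "response_min": 45},
    {"relevant": False, "speed_min": 35, "response_min": 0},
]
m = alert_metrics(alerts)
print(f"precision {m['precision']:.0%}, median speed {m['median_speed_min']} min")
```

Medians are deliberate here: one overnight story with a slow alert would drag a mean well past the target without indicating a systemic problem.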
Quarterly deep review
Once per quarter, run a comprehensive review covering:
- Source list audit: Remove outlets that never generate relevant coverage. Add outlets that appeared in the missed-coverage log more than twice. Check that all UK nationals, top 30 trade titles, and priority broadcast programmes are indexed.
- Query audit: Review every active Boolean query. Are exclusion lists current? Have any new brand name collisions emerged? Are proximity operators still appropriate?
- Vendor performance review: Compare the vendor's coverage, alert speed, and sentiment accuracy against the SLAs in your contract. Document any gaps and raise with your account manager.
- Taxonomy review: Are all tags still in use? Are any new tags needed based on business changes? (New product, new market, new executive.)
Common mistake: QA that is too ambitious to sustain
A UK utility company designed a QA programme requiring daily audits of 100 articles, weekly Boolean query reviews, and monthly vendor scorecards. The analyst responsible lasted three weeks before the routine collapsed. The QA programme was technically excellent but operationally impossible for a team of two.
A sustainable QA routine is: 15-minute weekly spot-check of 30-50 items, a missed-coverage log that the whole team feeds, and a 90-minute quarterly deep review. Consistency matters more than thoroughness. A modest QA routine that runs every week beats a comprehensive one that was abandoned in month two.