
Quarterly brand trackers from YouGov, Kantar, or Savanta tell you where reputation was. Weekly signal monitoring tells you where it is going. The gap between those two cadences is where reputational damage compounds -- a negative narrative builds for 8 weeks before the next tracker confirms what your monitoring should have caught in week 2.

This article covers the specific signals to track, where to find them, and how to structure a weekly review that takes 30 minutes and actually drives action.

The five signal categories that matter

Not all media and social signals carry equal weight for reputation. Focus your weekly monitoring on these five categories, ranked by lead time (how far ahead of survey movement they typically appear):

1. Regulatory and watchdog mentions (lead time: 4-8 weeks)

FCA enforcement notices, CMA investigation announcements, ICO reprimands, ASA rulings, and Ofcom complaints are the strongest early signals of reputation risk in the UK. These mentions carry authority and get amplified by journalists who cover the sector.

What to track: Any mention of your brand in conjunction with FCA, CMA, ICO, Ofcom, ASA, HSE, or relevant sector regulators. Set up dedicated Boolean alerts in Meltwater or Signal AI -- do not rely on general brand monitoring to catch these.
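Exact Boolean syntax differs between platforms, so check your tool's documentation, but the general shape of the alert is the same everywhere. A sketch, with "AcmeBank" standing in for your brand:

```
"AcmeBank" AND ("FCA" OR "Financial Conduct Authority"
  OR "CMA" OR "Competition and Markets Authority"
  OR "ICO" OR "Information Commissioner's Office"
  OR "Ofcom" OR "ASA" OR "HSE")
```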

2. Tier 1 editorial tone shifts (lead time: 2-6 weeks)

A change in how the FT, Guardian, Times, or Telegraph frames your organisation is often the first visible sign of a narrative shift. Watch for:

  • A move from neutral reporting to critical framing (e.g., "the company said" shifting to "the company claimed" or "the company failed to")
  • New journalists picking up your story who were not previously covering your sector
  • Editorial or opinion pieces (not just news) that reference your brand negatively

3. Employee and Glassdoor sentiment (lead time: 2-4 weeks)

Internal reputation problems leak externally. A spike in negative Glassdoor reviews, a trending post on Blind, or employee commentary on LinkedIn often precedes media coverage of workplace culture issues. Track Glassdoor score changes weekly and set alerts for your company name on Reddit (r/UKJobs, sector-specific subreddits).

4. Social conversation velocity (lead time: 1-3 weeks)

A sudden increase in brand mentions on X/Twitter, TikTok, or LinkedIn -- especially when accompanied by negative sentiment -- signals an emerging issue. The volume matters less than the velocity: 200 mentions in 2 hours is a stronger signal than 2,000 mentions spread over a month.

Use Brandwatch or Pulsar to track velocity rather than absolute volume. Set alerts for velocity spikes (e.g., 3x baseline within any 4-hour window).
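Most platforms let you export raw mention timestamps, which makes the velocity rule easy to verify outside the dashboard. A minimal sketch in Python, assuming a list of `datetime` objects; the function and parameter names are illustrative, not any platform's API:

```python
from collections import deque
from datetime import timedelta

def velocity_spike(mention_times, baseline_per_hour,
                   window_hours=4, multiplier=3):
    """Flag if any rolling window runs at multiplier x the baseline rate."""
    window = timedelta(hours=window_hours)
    # e.g. 3x baseline sustained over a 4-hour window, per the rule above
    threshold = baseline_per_hour * window_hours * multiplier
    recent = deque()
    for t in sorted(mention_times):
        recent.append(t)
        # drop mentions that have fallen out of the rolling window
        while t - recent[0] > window:
            recent.popleft()
        if len(recent) > threshold:
            return True
    return False
```

With a baseline of 10 mentions an hour, any 4-hour window containing more than 120 mentions trips the alert, however quiet the rest of the week was.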

5. Customer complaint patterns (lead time: 1-2 weeks)

Complaints to the Financial Ombudsman Service (FOS), Trustpilot review spikes, and customer service escalation volumes are real-time reputation indicators. A 40% week-on-week increase in negative Trustpilot reviews is a signal your comms team should know about before it becomes a Guardian consumer story.
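The week-on-week check is simple enough to run from a spreadsheet export. A minimal sketch with invented figures; the 40% threshold follows the example above and should be tuned to your own baseline variance:

```python
def complaint_spike(this_week: int, last_week: int,
                    threshold: float = 0.40) -> bool:
    """Flag a week-on-week rise in negative reviews above the threshold."""
    if last_week == 0:
        # any negative reviews after a zero week is worth a look
        return this_week > 0
    return (this_week - last_week) / last_week > threshold

print(complaint_spike(42, 28))  # a 50% rise -> True
```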

Setting baselines and thresholds

Signals are meaningless without baselines. Before you can say "this is unusual," you need 8-12 weeks of normal-state data for each signal category.

How to set baselines:

  • Pull 12 weeks of historical data from your monitoring platform (Meltwater, Cision, Signal AI).
  • Calculate the weekly average and standard deviation for each signal category.
  • Set three threshold levels:

| Level | Threshold | Action |
|-------|-----------|--------|
| Watch | 1.5x baseline | Note in weekly review, no escalation |
| Investigate | 2.5x baseline | Analyst reviews within 24 hours, briefing note prepared |
| Escalate | 4x baseline or any Tier 1 negative editorial | Head of comms notified same day, holding statement reviewed |

These thresholds should be calibrated to your sector. Financial services and energy companies will have higher ambient noise levels than, say, a professional services firm.
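The baseline-and-threshold logic fits in a few lines. A minimal sketch, assuming you export 12 weekly counts per signal category from your platform; the sample history is invented:

```python
from statistics import mean, stdev

# Multiples of baseline, checked highest first
THRESHOLDS = [("Escalate", 4.0), ("Investigate", 2.5), ("Watch", 1.5)]

def classify(weekly_history, this_week):
    """Return the threshold level this week's count triggers, or None."""
    baseline = mean(weekly_history)
    spread = stdev(weekly_history)
    # sanity check: a Watch threshold that sits within one standard
    # deviation of the mean will fire on ordinary noise
    if baseline * 0.5 < spread:
        print("warning: noisy baseline, consider recalibrating thresholds")
    for level, multiple in THRESHOLDS:
        if this_week >= baseline * multiple:
            return level
    return None

history = [30, 25, 34, 28, 31, 27, 29, 33, 26, 30, 32, 28]  # 12-week sample
print(classify(history, 80))  # roughly 2.7x baseline -> "Investigate"
```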

The 30-minute weekly review

Run this every Monday morning. One person owns the process; the output is a one-page summary shared with the comms leadership team by 10am.

Minute 0-10: Dashboard scan. Open your Meltwater/Signal AI/Brandwatch dashboard. Check each signal category against baseline thresholds. Flag anything at Watch level or above.

Minute 10-18: Context check. For any flagged signals, determine whether there is an obvious cause (planned announcement, competitor news, sector event). Cross-reference with the comms calendar and the news diary.

Minute 18-25: Write the summary. One paragraph per flagged signal: what it is, what likely caused it, and whether it requires action. If nothing is flagged, state that explicitly -- "All five signal categories within baseline range. No action required."

Minute 25-30: Assign actions. If any signal is at Investigate or Escalate level, assign an owner and a deadline for the next step. Record owners and deadlines in the same shared tracker every week.

Cross-checking signals for confidence

Single-channel signals can be misleading. A spike in social mentions might be driven by a single viral post that burns out in hours. A Glassdoor review cluster might be from a single disgruntled team after redundancies.

The confidence level rises when signals align across channels:

  • Regulatory mention + FT coverage + social velocity spike = high confidence, escalate immediately
  • Social spike only, no editorial pickup = medium confidence, monitor for 48 hours
  • Glassdoor dip only, no external coverage = low confidence, note and review next week

This cross-channel logic should be built into your escalation rules, not left to individual judgement each time.
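A minimal sketch of those rules as code; the flags would come from the weekly dashboard scan, and the fallback branch for combinations the list above does not name is an assumption, not a rule from this article:

```python
def escalation(regulatory: bool, tier1: bool,
               social: bool, glassdoor: bool) -> str:
    """Map this week's flagged channels to a confidence level and action."""
    if regulatory and tier1 and social:
        return "high confidence: escalate immediately"
    if social and not (regulatory or tier1 or glassdoor):
        return "medium confidence: monitor for 48 hours"
    if glassdoor and not (regulatory or tier1 or social):
        return "low confidence: note and review next week"
    # combinations not covered by the three rules above (assumption)
    return "partial alignment: analyst review within 24 hours"

print(escalation(regulatory=True, tier1=True, social=True, glassdoor=False))
```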

Common mistake: tracking too many signals with no action path

A UK retail bank subscribed to Brandwatch, Meltwater, and a Glassdoor monitoring tool, generating over 300 weekly data points across 15 signal categories. The comms team dutifully compiled a 6-page weekly report. Nobody read it past page 2, and when a genuine CMA investigation signal appeared, it was buried on page 5 under "regulatory mentions -- miscellaneous."

They cut back to 5 signal categories, 3 threshold levels, and a 1-page weekly summary. An equivalent CMA signal now appears as a red flag on line 1. The shorter report took less time to produce, and more people acted on it.

Track fewer signals with clear thresholds and action paths. A signal without an action path is just data decoration.
