Most UK corporate crises do not arrive without warning. They send signals -- sometimes days, sometimes weeks before the story breaks. The Post Office Horizon scandal had warning signals in Computer Weekly for years before ITV's Mr Bates vs The Post Office brought it to mass attention in January 2024. The Carillion collapse was preceded by months of trade press commentary on contract margins. The challenge is not that signals are invisible; it is that comms teams are not structured to detect them early enough.

The six early warning signal types

Each signal type has a different lead time and a different detection method. Your monitoring setup needs to cover all six.

1. Trade and specialist press rumblings (lead time: weeks to months)

Stories that eventually become national news almost always appear first in trade outlets. Construction News, Insurance Times, Health Service Journal, Citywire, The Grocer, and sector-specific publications are where journalists with deep domain knowledge publish early investigations.

What to monitor: Set dedicated alerts in Meltwater or Signal AI for your brand, key executives, and top risk topics across a curated list of 15-20 trade titles relevant to your sector. Do not rely on general monitoring to catch these -- trade publications are often poorly indexed by platforms unless you specifically add them.

2. Regulatory signals (lead time: days to weeks)

FCA warning notices, CMA phase 1 investigation announcements, ICO reprimands, Ofcom bulletins, and ASA rulings follow a predictable publication cadence. These are public documents that journalists monitor directly.

What to monitor: RSS feeds or alerts from regulator websites for your brand and sector terms. The FCA publishes enforcement actions on a regular schedule, and the CMA maintains public case pages. Set up Google Alerts as a belt-and-braces backup. In your monitoring platform, create a dedicated "Regulatory" query group that combines your brand with all relevant regulator names and enforcement terms.
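
That "Regulatory" query group is just a boolean query combining three OR-groups with AND. A minimal sketch of how one might be assembled -- the brand names and term lists below are illustrative, and the exact boolean syntax varies by platform:

```python
# Sketch: assemble a boolean query string for a "Regulatory" alert group.
# Brand names and term lists here are illustrative examples only.
BRAND_TERMS = ['"Acme Insurance"', '"Acme Group"']
REGULATORS = ["FCA", "CMA", "ICO", "Ofcom", "ASA"]
ENFORCEMENT_TERMS = ['"warning notice"', "investigation", "reprimand",
                     "enforcement", "ruling", "fine"]

def build_query(brands, regulators, terms):
    """Combine three OR-groups with AND: brand AND regulator AND term."""
    def group(items):
        return "(" + " OR ".join(items) + ")"
    return " AND ".join([group(brands), group(regulators), group(terms)])

query = build_query(BRAND_TERMS, REGULATORS, ENFORCEMENT_TERMS)
print(query)
```

Most platforms accept a pasted boolean string of this shape, but check your platform's operator syntax before relying on it.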

3. Parliamentary and political signals (lead time: days to weeks)

Written parliamentary questions, select committee hearing agendas, Early Day Motions, and ministerial statements can signal that your organisation is about to receive political scrutiny. If an MP tables a question about your sector, journalists who cover that beat will be looking for a story.

What to monitor: Hansard search alerts for your brand and key issues. TheyWorkForYou.com provides email alerts for mentions in Parliament. These are free and take 5 minutes to set up.

4. Social velocity anomalies (lead time: hours to days)

A sudden, sustained increase in social mentions -- especially when accompanied by negative sentiment and cross-platform spread -- often precedes editorial coverage by 12-48 hours. Journalists increasingly source stories from social platforms, particularly X/Twitter and Reddit.

What to monitor: Velocity alerts in Brandwatch or Pulsar (3x baseline over a rolling 4-hour window). Pay particular attention to mentions from accounts with journalist bios, verified media accounts, or accounts with 50k+ followers. A single tweet from a Guardian or BBC journalist asking "has anyone else experienced [issue] with [your brand]?" is a near-certain indicator that editorial coverage is being researched.
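
The velocity rule above -- 3x baseline over a rolling 4-hour window -- reduces to a simple count-and-compare. A minimal sketch, assuming you can export mention timestamps and know your hourly baseline (the numbers are placeholders):

```python
from datetime import datetime, timedelta

def velocity_alert(mention_times, baseline_per_hour, now,
                   window_hours=4, multiplier=3.0):
    """True when mentions in the rolling window exceed multiplier x baseline.

    mention_times: timestamps of individual mentions.
    baseline_per_hour: your normal hourly mention volume.
    """
    window_start = now - timedelta(hours=window_hours)
    recent = sum(1 for t in mention_times if t >= window_start)
    threshold = baseline_per_hour * window_hours * multiplier
    return recent > threshold
```

With a baseline of 10 mentions per hour, the 4-hour threshold is 120; 150 mentions in the window fires the alert, 100 does not.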

5. Employee and insider signals (lead time: days to weeks)

Glassdoor review spikes, Reddit posts in sector-specific subreddits, LinkedIn posts from current or former employees, and anonymous tips to journalists via platforms like SecureDrop all represent insider knowledge leaking externally.

What to monitor: Weekly Glassdoor score tracking (you can do this manually in 5 minutes or use a tool like Brandwatch's review monitoring). Reddit alerts for your brand name in relevant subreddits. LinkedIn keyword alerts for your company name + terms like "disappointed," "leaving," "culture," "toxic."

6. Cross-channel convergence (lead time: hours)

The highest-confidence early warning signal is when multiple channels light up simultaneously: trade press publishes a critical piece, social velocity spikes, and a regulator or politician mentions the topic. When three or more signal types converge within a 24-hour window, treat it as a pre-crisis condition regardless of the individual signal strength.

Building an early warning dashboard

Your early warning system should be a single dashboard view (in Meltwater, Signal AI, or a custom Power BI / Looker Studio build) with one panel per signal type. Each panel shows:

  • Current status: Green (within baseline), Amber (1.5-2.5x baseline), Red (above 2.5x baseline or any Tier 3 trigger)
  • Last 7 days trend line
  • Most recent significant mention with source and timestamp
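
The RAG status logic above maps directly to a small function. A sketch using the article's thresholds (Amber at 1.5-2.5x baseline, Red above 2.5x or on any Tier 3 trigger):

```python
def panel_status(current, baseline, tier3_trigger=False):
    """Map current volume against baseline to the dashboard RAG status.

    Thresholds follow the article: Amber at 1.5-2.5x baseline,
    Red above 2.5x baseline or on any Tier 3 trigger.
    """
    if tier3_trigger:
        return "Red"
    ratio = current / baseline if baseline else float("inf")
    if ratio > 2.5:
        return "Red"
    if ratio >= 1.5:
        return "Amber"
    return "Green"
```

A Tier 3 trigger overrides the volume ratio entirely, which is why it is checked first.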

The dashboard should be checked twice daily: at 08:00 and 16:00. During active monitoring periods (announced results, regulatory decisions, planned announcements), increase to every 2 hours.

Measuring time-to-detect and time-to-escalate

Two metrics determine whether your early warning system is working:

Time-to-detect (TTD): The interval between the first public signal appearing and your team logging it. Target: under 2 hours during business hours, under 6 hours outside hours.

Time-to-escalate (TTE): The interval between detection and the head of comms being briefed (for signals that warrant escalation). Target: under 1 hour during business hours, under 3 hours outside hours.

Track these for every significant signal over a quarter. If your average TTD exceeds 4 hours, your alert configuration or triage process has a gap. If TTE exceeds 2 hours, your escalation path has a bottleneck -- usually unclear ownership or a missing on-call rota.
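
Both metrics are averages over (start, end) timestamp pairs pulled from the signal log. A sketch with placeholder timestamps:

```python
from datetime import datetime

def average_hours(intervals):
    """Mean interval in hours across (start, end) timestamp pairs."""
    deltas = [(end - start).total_seconds() / 3600 for start, end in intervals]
    return sum(deltas) / len(deltas)

# For TTD, each pair is (first public signal, time the team logged it);
# for TTE, (time logged, time the head of comms was briefed).
detections = [
    (datetime(2024, 1, 8, 9, 15), datetime(2024, 1, 8, 10, 45)),    # 1.5h
    (datetime(2024, 1, 12, 14, 0), datetime(2024, 1, 12, 16, 30)),  # 2.5h
]
avg_ttd = average_hours(detections)
if avg_ttd > 4:
    print("Check alert configuration and triage process")
```

In practice you would also split in-hours from out-of-hours pairs, since the targets differ.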

Setting up a signal log

Maintain a running signal log (shared Google Sheet or Confluence page) with the following columns:

| Date/time | Signal type | Source | Summary | Severity | Action taken | Outcome |
|-----------|-------------|--------|---------|----------|--------------|---------|
| 14 Jan 09:22 | Trade press | Insurance Times | Article questioning claims handling delays | Amber | Briefing note prepared, monitoring increased | No further escalation, volume returned to baseline by 16 Jan |

This log serves three purposes: it creates an audit trail, it builds institutional memory for calibrating thresholds, and it provides evidence for post-incident reviews.
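
If the shared sheet outgrows manual entry, the same columns can be appended to a CSV by a small script. A sketch, with column names mirroring the table above:

```python
import csv
import os

# Column names mirror the signal log table in this article.
LOG_COLUMNS = ["datetime", "signal_type", "source", "summary",
               "severity", "action_taken", "outcome"]

def log_signal(path, **entry):
    """Append one row to a CSV signal log, writing the header on first use.

    Unsupplied columns are left blank so partial entries still line up.
    """
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_COLUMNS)
        if write_header:
            writer.writeheader()
        writer.writerow({col: entry.get(col, "") for col in LOG_COLUMNS})
```

A CSV keeps the audit trail machine-readable, which makes the quarterly calibration review much faster.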

Common mistake: monitoring only your own brand

A UK asset management firm had comprehensive early warning monitoring for its own brand name and products. When a competitor fund collapsed due to liquidity issues, the firm was caught off guard by contagion coverage -- the FT and Guardian ran sector-wide pieces questioning whether other funds had similar risk exposure. The firm's name appeared in these pieces by association, not by direct mention, and the brand-only monitoring missed it entirely.

The fix: Include 3-5 sector-level queries in your early warning system alongside brand-specific ones. Monitor for "[your sector] + crisis/investigation/failure/collapse" and for your top 3-4 competitors' names + risk terms. A competitor crisis is often a 48-hour warning that the same scrutiny is coming your way.

Quarterly calibration

Every quarter, review:

  • False positive rate per signal type (target: under 30% for Amber signals, under 10% for Red signals)
  • Signals that were missed and only detected after they appeared in national media
  • Whether trade publication and regulator source lists are still current
  • Whether the on-call rota and escalation contacts are up to date
  • Any new signal sources that should be added (new trade publication, new regulatory body, new social platform gaining traction in your audience)
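
The false positive rate in the first bullet can be computed straight from the signal log, provided each entry is flagged true/false positive at review time. A sketch, assuming entries carry a severity and a false_positive flag:

```python
from collections import defaultdict

def false_positive_rates(log_entries):
    """Per-severity false positive rate from a quarter's signal log.

    Targets from the article: under 0.30 for Amber, under 0.10 for Red.
    Each entry needs a 'severity' and a boolean 'false_positive' field.
    """
    counts = defaultdict(lambda: [0, 0])  # severity -> [false positives, total]
    for entry in log_entries:
        counts[entry["severity"]][1] += 1
        if entry["false_positive"]:
            counts[entry["severity"]][0] += 1
    return {sev: fp / total for sev, (fp, total) in counts.items()}
```

A rate well below target is also worth investigating: thresholds that never produce false positives are probably set too high to catch weak early signals.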

Early warning monitoring is a system, not a project. It requires ongoing maintenance, and the payoff is measured in crises averted or contained early rather than crises survived.