The difference between a social listening programme that protects reputation and one that just generates noise is the escalation model. Most UK comms teams have alerts set up in Brandwatch, Pulsar, or Meltwater Social, but the alerts fire too often, go to too many people, and lack clear rules for what happens next. The result is alert fatigue -- the one time a genuine signal appears, it gets lost in the daily clutter.

Design alerts around impact, not volume

Volume-based alerts ("notify me when mentions exceed 100 per hour") are the default in most platforms and the main source of fatigue. A viral meme, a celebrity name-drop, or a trending hashtag can spike volume without any reputational risk.

Better alert triggers:

| Trigger type | Definition | Example |
|--------------|------------|---------|
| Velocity spike | Mentions increase 3x above 4-hour rolling baseline | Brand mentions jump from 40/hour to 140/hour between 14:00-18:00 |
| High-authority source | Mention from a verified journalist, MP, regulator account, or 100k+ follower account | FT journalist tweets about your company with a link to an upcoming story |
| Negative sentiment cluster | 5+ negative mentions within 30 minutes from distinct accounts on the same topic | Multiple customers reporting the same service outage on X/Twitter |
| Coordinated pattern | Multiple accounts posting similar language within a short window (potential campaign) | Identical phrasing across 20+ accounts in 2 hours -- may indicate organised activity |
| Regulator/watchdog mention | Brand mentioned alongside FCA, CMA, ICO, Ofcom, ASA, or HSE on social | Consumer tweets tagging @TheFCA about your product with "complaint" |

Configure these in your platform. Brandwatch allows custom alert rules combining volume thresholds with sentiment filters and source authority. Pulsar TRAC can detect velocity anomalies automatically. Meltwater Social supports Boolean-based alerts with reach thresholds.
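
If your platform's native rules can't express a rolling-baseline comparison, the same logic is easy to prototype against exported hourly mention counts before you commit to a rule. A minimal sketch (the function and its inputs are illustrative, not any platform's API):

```python
def velocity_spike(hourly_counts, current_count, multiplier=3.0, window_hours=4):
    """Return True when the current hourly count is at least `multiplier` times
    the rolling baseline over the previous `window_hours` hours.

    hourly_counts: mention counts for the preceding hours, oldest first
    (e.g. exported from your listening platform as a CSV).
    """
    recent = hourly_counts[-window_hours:]
    if not recent:
        return False
    baseline = sum(recent) / len(recent)
    # Guard against a near-zero baseline flagging trivial activity.
    return baseline > 0 and current_count >= multiplier * baseline

# The example from the table: baseline around 40/hour, current hour at 140.
print(velocity_spike([38, 42, 41, 39], 140))  # True
```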

Three-tier escalation model

Not every alert needs the same response. Use three tiers to prevent over-reaction to noise and under-reaction to genuine risk.

Tier 1: Monitor (analyst level)

Trigger: A single isolated alert, a velocity spike from low-authority sources, or a negative cluster under 10 mentions.

Action: The monitoring analyst logs the alert, checks context (is this organic or triggered by external news?), and notes it in the daily log. No escalation unless the signal grows.

Timeline: Assessed within 30 minutes during business hours. Outside hours, assessed at next check-in.

Who is notified: Monitoring analyst only.

Tier 2: Investigate (comms manager level)

Trigger: Velocity spike sustained for 2+ hours, high-authority source mention, negative cluster exceeding 20 mentions, or any regulator/watchdog mention.

Action: Comms manager reviews the alert, cross-checks against media monitoring (has this appeared in editorial coverage?), and prepares a one-paragraph briefing note. If the signal has potential to escalate, a draft holding statement is pulled from the crisis playbook.

Timeline: Assessed within 1 hour during business hours. Outside hours, SMS notification to on-call comms manager, assessed within 2 hours.

Who is notified: Comms manager, monitoring analyst.

Tier 3: Escalate (head of comms / crisis team)

Trigger: Velocity spike exceeding 5x baseline sustained for 4+ hours, pickup by a tier-one national outlet (Guardian, BBC, FT, Sky News), coordinated campaign pattern, or any mention from a regulator's official account.

Action: Head of comms is briefed. Crisis protocol activated if applicable. Holding statement reviewed and approved. Monitoring shifts to real-time (15-minute refresh). Legal and senior leadership notified per crisis plan.

Timeline: Head of comms notified within 30 minutes of Tier 3 trigger. First external response (if needed) within 4 hours.

Who is notified: Head of comms, CEO office, legal (if relevant), crisis team.
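
The tier definitions translate directly into a small classification function, which is a useful check that your platform rules and your documented model agree. A rough sketch with hypothetical signal fields; the thresholds mirror the triggers above:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    velocity_multiplier: float       # current rate vs rolling baseline
    spike_duration_hours: float      # how long the spike has been sustained
    negative_cluster_size: int       # distinct negative mentions on one topic
    high_authority_source: bool      # journalist, MP, regulator, 100k+ followers
    regulator_mention: bool          # FCA, CMA, ICO, Ofcom, ASA, HSE named
    regulator_official_account: bool # the regulator's own account posted
    tier_one_media_pickup: bool      # Guardian, BBC, FT, Sky News
    coordinated_pattern: bool

def classify(sig: Signal) -> int:
    """Map a signal to escalation tier 1-3 per the model above."""
    if (sig.regulator_official_account
            or sig.tier_one_media_pickup
            or sig.coordinated_pattern
            or (sig.velocity_multiplier >= 5 and sig.spike_duration_hours >= 4)):
        return 3
    if (sig.regulator_mention
            or sig.high_authority_source
            or sig.negative_cluster_size > 20
            or (sig.velocity_multiplier >= 3 and sig.spike_duration_hours >= 2)):
        return 2
    return 1
```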

Alert routing and coverage hours

Business hours (08:00-18:00 Mon-Fri):

  • Tier 1 alerts routed to monitoring analyst via platform notification and email.
  • Tier 2 alerts routed to comms manager via email and SMS.
  • Tier 3 alerts routed to head of comms via SMS and phone call.

Outside hours (18:00-08:00 and weekends):

  • Tier 1 alerts accumulate in the platform for morning triage.
  • Tier 2 alerts routed to on-call comms manager via SMS.
  • Tier 3 alerts routed to on-call comms manager AND head of comms via SMS and phone call.

Maintain a shared on-call rota (Google Sheet or PagerDuty). Update it every Friday for the following week. There should never be ambiguity about who receives the alert outside office hours.
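
The routing rules are simple enough to encode once and reuse, whether in your platform's notification settings or a small wrapper script that reads the rota. A sketch under assumed placeholders (the contact values and return strings are illustrative, not a real integration):

```python
from datetime import datetime

# Placeholder on-call contacts -- in practice, read these from the shared rota.
ON_CALL = {"comms_manager": "+44 7700 900001",
           "head_of_comms": "+44 7700 900002"}

def in_business_hours(now: datetime) -> bool:
    """Business hours: 08:00-18:00, Monday to Friday."""
    return now.weekday() < 5 and 8 <= now.hour < 18

def route_alert(tier: int, now: datetime) -> list[str]:
    """Return who gets notified and how, per the coverage-hours rules above."""
    if in_business_hours(now):
        return {1: ["monitoring analyst: platform notification + email"],
                2: ["comms manager: email + SMS"],
                3: ["head of comms: SMS + phone call"]}[tier]
    if tier == 1:
        return ["accumulate in platform for morning triage"]
    if tier == 2:
        return [f"on-call comms manager: SMS {ON_CALL['comms_manager']}"]
    return [f"on-call comms manager: SMS {ON_CALL['comms_manager']}",
            f"head of comms: SMS + phone call {ON_CALL['head_of_comms']}"]
```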

Reducing false positives

Alert fatigue is the enemy. If the team receives more than 3-5 Tier 2 alerts per week that turn out to be false positives, the system needs recalibration.

Common sources of false positives and fixes:

  • Brand name ambiguity. If your brand name is a common word (e.g., "Shell," "Next," "Sage"), add exclusion terms or require co-occurrence with sector-specific keywords. In Brandwatch, use context rules to filter by industry category.
  • Retweet/share cascades. A single viral post getting reshared 500 times is one event, not 500. Configure alerts to de-duplicate by original post where possible (see the sketch after this list).
  • Satire and meme accounts. Exclude known parody and meme accounts from alert triggers. Maintain an exclusion list and update it monthly.
  • Competitor mentions. "I switched from [your brand] to [competitor]" mentions the brand but the conversation is about the competitor. Use sentiment + context filtering to catch this.
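
The de-duplication and exclusion-list fixes can be prototyped against an export before you change platform rules. A rough sketch assuming each mention is a dict with 'id', 'author' and an optional 'shared_from' field (field names are illustrative, not any platform's schema):

```python
# Hypothetical exclusion list -- review and update monthly.
EXCLUDED_ACCOUNTS = {"parody_account_example", "meme_page_example"}

def filter_mentions(mentions):
    """Drop parody/meme accounts and collapse reshare cascades to one event."""
    seen_originals = set()
    kept = []
    for m in mentions:
        if m["author"].lower() in EXCLUDED_ACCOUNTS:
            continue
        # A reshare points back at the original post; count that post once.
        original_id = m.get("shared_from") or m["id"]
        if original_id in seen_originals:
            continue
        seen_originals.add(original_id)
        kept.append(m)
    return kept
```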

Track false positive rates monthly. If Tier 2 false positives exceed 40%, tighten the trigger rules. If Tier 3 false positives exceed 10%, something is fundamentally wrong with the alert design.
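
Tracking those thresholds is straightforward if every alert outcome is logged with its tier and a false-positive flag. A minimal sketch, assuming the monthly log is a simple list of (tier, was_false_positive) entries:

```python
def false_positive_rate(alert_log, tier):
    """alert_log: iterable of (tier, was_false_positive) tuples from the monthly log."""
    outcomes = [fp for t, fp in alert_log if t == tier]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

log = [(2, True), (2, False), (2, True), (3, False)]  # illustrative entries only
if false_positive_rate(log, 2) > 0.40:
    print("Tier 2 false positives above 40% -- tighten trigger rules")
if false_positive_rate(log, 3) > 0.10:
    print("Tier 3 false positives above 10% -- review alert design")
```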

Post-incident review process

After any Tier 2 or Tier 3 escalation (whether or not it turned into a real incident), run a 30-minute review within 5 working days.

Review questions:

1. What was the initial signal and when did it fire?
2. How long did it take to reach the right person?
3. Was the tier classification correct, or should it have been higher/lower?
4. Did the social signal correlate with media coverage? If so, which came first?
5. What was the outcome -- did it require external response?
6. Should the alert rules change based on this incident?

Document the answers in a shared log. Over 6-12 months, this log becomes the most valuable asset for calibrating your escalation model. Patterns emerge: certain trigger types consistently escalate, others consistently fizzle.
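
Those patterns are easier to query later if each review is logged with the same fields. One possible shape for a log entry (field names are illustrative, not prescriptive):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EscalationReview:
    fired_at: datetime               # when the initial signal fired
    minutes_to_right_person: int     # how long escalation took
    tier_assigned: int               # tier at the time
    tier_correct_in_hindsight: int   # tier it should have been
    media_coverage_followed: bool    # did editorial coverage correlate?
    external_response_required: bool
    rule_change_recommended: str     # free text, empty if none
```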

Common mistake: the alert that cried wolf

A UK telecoms company set up volume-based alerts in Brandwatch with a threshold of 150 mentions per hour. During a Premier League match where their sponsorship was visible on screen, alerts fired continuously for 90 minutes. The on-call comms manager received 11 SMS messages, determined each one was match-related chatter, and turned off SMS notifications. Two weeks later, a genuine customer service outage generated a Tier 3 social storm. The SMS alerts were still off. The head of comms found out via a journalist's call.

Volume-only alerts without sentiment and context filtering are worse than no alerts at all because they train the team to ignore the system. Build multi-factor triggers from day one.