Share of voice (SOV) is one of the most requested metrics in UK comms. It is also one of the most routinely butchered. The maths is simple -- your mentions divided by total mentions in a defined set -- but the inputs are where teams get into trouble. Bad SOV numbers do not just waste a slide; they lead to wrong budget decisions and misplaced confidence.

Define a stable, honest comparison set

The first error is comparing apples to shipping containers. Your SOV set must include organisations that genuinely compete for the same media attention in the same space. A mid-cap UK insurer should not benchmark against Aviva and AXA unless it actually competes with them for column inches in the FT, Insurance Times, and trade press.

Rules for building the set:

  • Pick 4-6 direct competitors. Fewer than 3 makes SOV meaningless; more than 8 dilutes the signal.
  • Lock the set for a full quarter. Changing competitors month to month makes trend data useless.
  • Document the rationale. When the CMO asks why Company X is not included, you need an answer beyond "we forgot."
  • If a competitor is acquired or delisted, note the date and adjust -- do not silently drop them.

In Meltwater, Cision, or Signal AI, save the competitor set as a named dashboard or search group so the query is reproducible and auditable.

Separate volume SOV from quality-weighted SOV

Raw volume SOV counts every mention equally. A passing reference in a regional daily scores the same as a 600-word feature in the Sunday Times. That is mathematically correct and editorially absurd.

A practical three-tier weighting model:

| Tier | Definition | Examples | Weight |
|------|------------|----------|--------|
| 1 | National broadsheets, broadcast flagship | FT, Guardian, BBC Today, Sky News, Times | x3 |
| 2 | Major trade, quality digital, national tabloid | PR Week, Citywire, Insurance Journal, Daily Mail | x2 |
| 3 | Regional, wire services, low-reach digital | PA Media pickups, regional dailies, aggregators | x1 |

Report both raw and weighted SOV. Raw SOV shows noise. Weighted SOV shows influence. When the two diverge -- say you have 35% raw SOV but only 18% weighted -- that tells you the coverage is wide but shallow.

Most platforms (Meltwater, Brandwatch, Signal AI) let you tag outlets by tier and filter dashboards accordingly. If your tool does not support custom outlet scoring, export to a spreadsheet and apply the weights there. It takes 20 minutes per month and is worth every one of them.
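If you do end up in a spreadsheet (or a script), the calculation is a few lines. This is an illustrative sketch only: the brand names and mention counts are invented, and the x3/x2/x1 weights come from the tier table above.

```python
# Raw vs quality-weighted share of voice.
# Tier weights follow the x3/x2/x1 model above; all mention
# counts are hypothetical, for illustration only.
TIER_WEIGHTS = {1: 3, 2: 2, 3: 1}

# mentions[brand][tier] = number of mentions in that outlet tier
mentions = {
    "us":     {1: 10, 2: 30, 3: 200},
    "comp_a": {1: 40, 2: 60, 3: 80},
    "comp_b": {1: 25, 2: 50, 3: 90},
}

def raw_sov(brand):
    """Share of all mentions, every mention counted equally."""
    total = sum(sum(t.values()) for t in mentions.values())
    return sum(mentions[brand].values()) / total

def weighted_sov(brand):
    """Share of tier-weighted mentions."""
    def score(tiers):
        return sum(TIER_WEIGHTS[tier] * n for tier, n in tiers.items())
    total = sum(score(t) for t in mentions.values())
    return score(mentions[brand]) / total

print(f"raw: {raw_sov('us'):.0%}, weighted: {weighted_sov('us'):.0%}")
```

With these invented numbers, "us" leads on raw SOV (lots of tier-3 pickups) but drops several points once weighted, which is exactly the wide-but-shallow divergence described above.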

Stop mixing sentiment into the SOV number

A common mistake: a team reports "40% positive share of voice" by filtering to positive mentions only, then calculating SOV on that subset. This conflates two separate metrics and makes neither reliable.

What actually happens: You had 200 total mentions. Your competitor had 300. But 120 of yours were positive vs only 90 of theirs. Your raw SOV is 40% (200/500). Your "positive SOV" is 57% (120/210). The 57% looks great in a board pack but hides the fact that you are being outmentioned overall.

Report SOV and sentiment as separate rows. If you want to combine them, use a composite index with explicit methodology -- and put the formula in the appendix.
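The worked numbers above make the point concretely. A brief sketch, using the same hypothetical figures (200 vs 300 mentions, 120 vs 90 positive):

```python
# SOV and sentiment reported as separate metrics, plus the
# misleading "positive SOV" the text warns against.
# All figures are the hypothetical ones from the example above.
ours, theirs = 200, 300
ours_pos, theirs_pos = 120, 90

sov = ours / (ours + theirs)                 # 40% - share of voice
pos_rate_ours = ours_pos / ours              # 60% - our sentiment
pos_rate_theirs = theirs_pos / theirs        # 30% - their sentiment

# The flawed composite: great-looking, but hides being outmentioned
positive_sov = ours_pos / (ours_pos + theirs_pos)  # ~57%

print(f"SOV {sov:.0%} | our positive rate {pos_rate_ours:.0%} "
      f"| their positive rate {pos_rate_theirs:.0%}")
```

Reported as separate rows, the story is honest: we are outmentioned 40/60, but our coverage skews far more positive (60% vs 30%).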

Account for category events that distort the baseline

In Q4 2024, several UK financial services firms saw their SOV collapse not because coverage dropped but because a single competitor was involved in an FCA enforcement action that generated hundreds of mentions in a week. That competitor's SOV spiked to 60%+, pushing everyone else down mechanically.

What to do:

  • Flag outlier events in your commentary. A one-week spike from a regulatory action is not a competitive trend.
  • Consider reporting SOV with and without the outlier event so leadership can see both pictures.
  • In Meltwater or Cision, use date-range exclusions or event tags to isolate the effect.

If you report the dip without context, you will spend the next board meeting explaining why you "lost ground" when nothing actually changed about your performance.
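Reporting "with and without the outlier" is a one-line filter once the spike week is tagged. A minimal sketch, with invented weekly counts and a hypothetical spike in week 3:

```python
# Quarterly SOV with and without an outlier week.
# Weekly counts are invented; week index 2 is the competitor's
# hypothetical enforcement-action spike.
weeks = [
    # (our_mentions, total_market_mentions)
    (50, 180),
    (45, 170),
    (40, 900),   # spike: competitor's crisis floods the category
    (55, 190),
]
outlier_weeks = {2}  # indices of flagged event weeks

def period_sov(selected):
    ours = sum(o for o, _ in selected)
    total = sum(t for _, t in selected)
    return ours / total

with_outlier = period_sov(weeks)
without_outlier = period_sov(
    [w for i, w in enumerate(weeks) if i not in outlier_weeks]
)
print(f"with outlier: {with_outlier:.0%}, without: {without_outlier:.0%}")
```

The gap between the two figures is the mechanical effect of the event: our own mention volume barely moved, yet headline SOV roughly halves when the spike week is included.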

Use rolling averages, not point-in-time snapshots

Monthly SOV is volatile. A single feature piece or crisis mention can swing the number by 10 percentage points. Rolling 13-week (quarterly) averages smooth the noise and reveal actual trends.

Practical setup:

  • Report weekly SOV in your operational dashboard for the comms team.
  • Report 13-week rolling SOV in the executive dashboard.
  • Show the quarter-on-quarter direction with a simple arrow or percentage-point change.

A useful benchmark: in competitive UK sectors like financial services, energy, and telecoms, a 2-3 percentage point quarterly shift is meaningful. Anything under 1 point is noise unless it persists for two consecutive quarters.
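The rolling calculation itself is trivial to automate. A sketch of the 13-week trailing mean, with invented weekly SOV values:

```python
# 13-week rolling SOV from weekly figures.
# Weekly SOV values below are invented for illustration.
from collections import deque

def rolling_sov(weekly_sov, window=13):
    """Yield the trailing-window mean once enough weeks exist."""
    buf = deque(maxlen=window)
    for value in weekly_sov:
        buf.append(value)
        if len(buf) == window:
            yield sum(buf) / window

weekly = [0.30, 0.35, 0.28, 0.42, 0.31, 0.29, 0.33,
          0.36, 0.27, 0.34, 0.32, 0.30, 0.38, 0.25]
smoothed = list(rolling_sov(weekly))
# 14 weeks in -> 2 rolling values; each new value drops the
# oldest week, so a single spike fades gradually instead of
# swinging the executive number by 10 points.
```

The weekly series feeds the operational dashboard as-is; only the smoothed series goes to the executive view.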

The common mistake that wastes the most time

A UK healthcare PR team spent six months reporting SOV against seven competitors using Cision, but each month the analyst tweaked the Boolean queries to "improve accuracy." The result: the underlying data set shifted every period, making the trend line meaningless. When the head of comms presented a 12-month SOV chart to the board, a non-executive asked why the numbers did not match the previous quarter's report. The team had no answer because the methodology had quietly drifted.

The fix is boring but essential: version-control your queries. Save the exact Boolean string, the source list, and the date range for each reporting period. When you change a query, start a new trend line -- do not retrofit the old one.
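Even a small structured record per reporting period is enough to prevent silent drift. A minimal sketch; the field names, query string, and outlet list are all illustrative, not from any particular platform:

```python
# A minimal query changelog: one immutable record per query
# version, so methodology changes are visible and dated.
# All field values are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SOVQueryVersion:
    version: str
    effective_from: date
    boolean_query: str        # the exact saved search string
    source_list: tuple        # the locked outlet set
    note: str = ""

changelog = [
    SOVQueryVersion(
        version="2024-Q3-v1",
        effective_from=date(2024, 7, 1),
        boolean_query='("Acme Insurance" OR "Acme Ins") NOT jobs',
        source_list=("FT", "Insurance Times", "PR Week"),
        note="Initial locked set for Q3.",
    ),
    # A query change appends a NEW record (and starts a new trend
    # line); frozen=True stops anyone editing an old one in place.
]
```

Whether this lives in a script, a spreadsheet tab, or the platform's own saved-search notes matters less than the discipline: every reported number can be traced to an exact query version.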

Checklist: SOV reporting that holds up

  • [ ] Competitor set is documented and locked for the quarter
  • [ ] Raw and quality-weighted SOV are reported separately
  • [ ] Sentiment is a separate metric, not baked into SOV
  • [ ] Outlier events are flagged in commentary
  • [ ] Rolling averages are used for executive reporting
  • [ ] Boolean queries are version-controlled with a changelog
  • [ ] Methodology note is included in every report or appendix

Get these right and SOV becomes a metric that drives decisions rather than debates.