A social listening taxonomy is the classification structure you apply to conversations captured by tools like Brandwatch, Pulsar, Meltwater Social, or Sprinklr. Done well, it turns a firehose of mentions into categorised insight that feeds weekly reports and strategic decisions. Done badly -- which is the norm -- it becomes a 60-tag monster that nobody applies consistently and that produces charts no one trusts.

Start with 5-8 top-level themes, not 30

The most common failure is over-engineering the taxonomy at launch. A UK retail bank built a 45-tag taxonomy in Brandwatch covering every conceivable topic from "branch experience" to "ATM availability" to "CEO sentiment." Within six weeks, analysts were spending more time tagging than analysing, and half the tags had fewer than 10 mentions per month -- too few to generate any meaningful trend.

A taxonomy that works for most UK organisations:

| Theme | What it captures | Example queries |
|-------|-----------------|-----------------|
| Product/service experience | Customer feedback on core offerings | Brand + ("app" OR "service" OR "complaint" OR "broken" OR "love") |
| Price and value | Cost concerns, value comparisons | Brand + ("price" OR "expensive" OR "cheap" OR "value" OR "cost") |
| Trust and reputation | General brand perception, recommendations | Brand + ("trust" OR "recommend" OR "avoid" OR "reputation") |
| People and culture | Employee experience, leadership perception | Brand + ("work for" OR "CEO" OR "staff" OR "culture" OR "Glassdoor") |
| Sustainability/ESG | Environmental and social responsibility | Brand + ("green" OR "carbon" OR "ESG" OR "greenwashing" OR "net zero") |
| Regulatory and compliance | Mentions alongside regulators or enforcement | Brand + ("FCA" OR "CMA" OR "ICO" OR "Ofcom" OR "fine" OR "investigation") |
| Competitor comparison | Direct comparisons with named competitors | Brand + competitor name + ("better" OR "worse" OR "switch" OR "vs") |

Seven themes. Each one maps to a real business question that a head of comms or CMO would ask. If a theme does not connect to a question leadership asks at least quarterly, drop it.
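
Keeping the taxonomy as a shared config rather than a slide means downstream scripts can reuse it. A minimal Python sketch -- the keyword lists are trimmed from the table above and purely illustrative; real classification happens inside the listening platform via its Boolean queries:

```python
# The seven-theme taxonomy as a plain data structure, with a naive
# keyword pre-classifier. Keyword lists are trimmed from the table
# above; real matching is done by the platform's Boolean queries.
THEMES = {
    "Product/service experience": ["app", "service", "complaint", "broken", "love"],
    "Price and value": ["price", "expensive", "cheap", "value", "cost"],
    "Trust and reputation": ["trust", "recommend", "avoid", "reputation"],
    "People and culture": ["work for", "ceo", "staff", "culture", "glassdoor"],
    "Sustainability/ESG": ["green", "carbon", "esg", "greenwashing", "net zero"],
    "Regulatory and compliance": ["fca", "cma", "ico", "ofcom", "fine", "investigation"],
    "Competitor comparison": ["better", "worse", "switch", "vs"],
}

def pre_classify(mention: str) -> list[str]:
    """Return every theme whose keywords appear in the mention text."""
    text = mention.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)]

print(pre_classify("Love the new app, shame about the price"))
# -> ['Product/service experience', 'Price and value']
```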

Write tagging rules that a new starter could follow

Each theme needs a one-page rule sheet:

  • Definition: What this theme covers and what it does not. "Product/service experience covers first-hand customer accounts of using our products or services. It does NOT cover journalist reviews, analyst commentary, or competitor product mentions."
  • Inclusion examples: 3-5 real social posts that belong in this theme.
  • Exclusion examples: 3-5 real social posts that look like they belong but do not.
  • Automated query: The Boolean or keyword query in Brandwatch/Pulsar that pre-classifies mentions into this theme.
  • Manual override rules: When the automated classification should be corrected (e.g., sarcasm, double negatives).

If you cannot explain the tagging rule to a new analyst in under 2 minutes, the rule is too complicated. Simplify it.
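
The rule sheet can also live as structured data, so it is versioned alongside the queries rather than buried in a Word document. A sketch of what that might look like -- the field names and example content are hypothetical, not any platform's schema:

```python
# A rule sheet as versionable data. Schema and example content are
# hypothetical -- adapt the fields to your own tooling.
from dataclasses import dataclass

@dataclass
class TaggingRule:
    theme: str
    definition: str               # what the theme covers and what it does not
    include_examples: list[str]   # 3-5 real posts that belong
    exclude_examples: list[str]   # 3-5 near-misses that do not
    query: str                    # the Boolean query the platform runs
    override_rules: list[str]     # when to correct the auto-tag

product_experience = TaggingRule(
    theme="Product/service experience",
    definition=("First-hand customer accounts of using our products or "
                "services. Does NOT cover journalist reviews, analyst "
                "commentary, or competitor product mentions."),
    include_examples=["Week three of waiting for my claim and still nothing"],
    exclude_examples=["Read our hands-on review of the new banking app"],
    query='Brand AND ("app" OR "service" OR "complaint" OR "broken" OR "love")',
    override_rules=["Sarcasm: correct sentiment manually",
                    "Double negatives: read the full post before tagging"],
)
```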

Automate first, manually adjust second

Modern social listening platforms (Brandwatch Consumer Research, Pulsar TRAC, Meltwater Social) can auto-classify 70-85% of mentions accurately using keyword rules and AI classifiers. Your taxonomy workflow should be:

1. Automated classification catches the majority of mentions and applies the theme tag.
2. Daily manual review (15-20 minutes) where an analyst spot-checks a random sample of 20-30 mentions per theme and corrects misclassifications.
3. Weekly accuracy check -- calculate the percentage of the daily sample that was correctly auto-classified. If accuracy drops below 80% for any theme, revise the query.

A useful benchmark: aim for 85%+ auto-classification accuracy across all themes. Below 75% means your Boolean queries need reworking. Above 90% means you can reduce manual review time.
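
The weekly accuracy check is simple arithmetic on the daily spot-check log. A hedged sketch, assuming each spot-check is recorded as a (theme, was-the-auto-tag-correct) pair -- the sample data is invented:

```python
# Weekly accuracy check on the daily spot-check log. Each entry records
# the theme and whether the analyst agreed with the auto-applied tag.
from collections import defaultdict

spot_checks = [  # one week of daily samples, invented for illustration
    ("Price and value", True), ("Price and value", True),
    ("Price and value", False), ("Price and value", False),
    ("Regulatory and compliance", True), ("Regulatory and compliance", True),
]

def weekly_accuracy(samples):
    """Per-theme share of spot-checked mentions that were auto-tagged correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for theme, was_correct in samples:
        total[theme] += 1
        correct[theme] += was_correct
    return {theme: correct[theme] / total[theme] for theme in total}

for theme, acc in weekly_accuracy(spot_checks).items():
    if acc < 0.80:
        print(f"Rework query: {theme} at {acc:.0%}")        # below the 80% floor
    elif acc > 0.90:
        print(f"Reduce review time: {theme} at {acc:.0%}")  # above the 90% mark
```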

Sub-themes: add them later, not at launch

Resist the temptation to build sub-themes in week one. Run the top-level taxonomy for 8 weeks, review the data, and then add sub-themes only where the volume and business need justify it.

Example of justified sub-theme expansion:

The "Product/service experience" theme for a UK insurer was generating 400+ mentions per week. Drilling into the data showed three distinct clusters: claims experience, app/digital experience, and phone wait times. Adding these as sub-themes made the insight actionable -- the claims team got a dedicated feed, the digital team got theirs. But this only made sense because the volume was there.

Example of unjustified sub-theme:

The "Regulatory and compliance" theme for the same insurer generated 15-20 mentions per week. Splitting this into "FCA," "FOS," and "ICO" sub-themes created three buckets with 5-7 mentions each -- too few for trend analysis and not worth the classification effort.

Rule of thumb: do not create a sub-theme unless it will contain at least 50 mentions per month.
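
The rule of thumb is easy to encode so the split decision is mechanical rather than a debate. A sketch using illustrative volumes approximating the insurer examples above:

```python
# The 50-mentions-per-month rule as a mechanical check. Volumes are
# illustrative approximations of the insurer examples in the text.
MIN_MONTHLY_MENTIONS = 50

def subtheme_justified(monthly_volumes: dict[str, int]) -> dict[str, bool]:
    """Go/no-go per candidate sub-theme."""
    return {name: vol >= MIN_MONTHLY_MENTIONS
            for name, vol in monthly_volumes.items()}

# 400+ mentions/week splits cleanly into three viable sub-themes:
print(subtheme_justified({"claims": 700, "app/digital": 600, "phone waits": 400}))
# 15-20 mentions/week does not -- each bucket lands well under 50/month:
print(subtheme_justified({"FCA": 28, "FOS": 24, "ICO": 20}))
```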

Governance: one owner, quarterly reviews

Assign a single taxonomy owner. This person approves any changes to themes, sub-themes, or tagging rules. Without a single owner, you get taxonomy drift -- different analysts interpret rules differently, new themes get added without documentation, and old themes linger with zero mentions.

Quarterly review agenda (45 minutes):

  • Review mention volume per theme. Drop any theme averaging fewer than 20 mentions per month.
  • Review auto-classification accuracy per theme. Rework queries below 80%.
  • Check for emerging topics not captured by current themes. Add a new theme only if it maps to a business question.
  • Review the changelog (every taxonomy change should be logged with date, reason, and who approved it).
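
The first two agenda items can be pre-computed before the meeting. A sketch with invented inputs, applying the 20-mentions-per-month and 80%-accuracy thresholds from the list above:

```python
# Pre-computing the volume and accuracy flags for the quarterly review.
# Inputs are invented; the thresholds come from the agenda above.
def quarterly_flags(theme_stats: dict[str, dict]) -> list[str]:
    flags = []
    for name, stats in theme_stats.items():
        if stats["avg_monthly_mentions"] < 20:
            flags.append(f"DROP: {name} ({stats['avg_monthly_mentions']}/month)")
        if stats["accuracy"] < 0.80:
            flags.append(f"REWORK: {name} ({stats['accuracy']:.0%} accuracy)")
    return flags

print(quarterly_flags({
    "Competitor comparison": {"avg_monthly_mentions": 12, "accuracy": 0.91},
    "Price and value": {"avg_monthly_mentions": 310, "accuracy": 0.74},
}))
# -> ['DROP: Competitor comparison (12/month)', 'REWORK: Price and value (74% accuracy)']
```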

Connecting the taxonomy to reporting

A taxonomy only adds value if it shows up in reports that people read. Each theme should produce at least one of these outputs:

  • Weekly trend line in the social listening dashboard (volume over time, sentiment split)
  • Monthly highlight in the executive report (top theme, biggest shift, recommended action)
  • Alert trigger for high-velocity negative themes (e.g., "Regulatory and compliance" mentions spike 3x baseline -- escalate)
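
The third output, the alert trigger, is a baseline comparison. A minimal sketch, assuming daily mention counts per theme and a 28-day trailing baseline -- the window length and the counts are assumptions; the 3x multiplier comes from the bullet above:

```python
# Spike alert for a high-velocity negative theme: flag when today's
# volume exceeds 3x the trailing baseline.
from statistics import mean

def spike_alert(daily_counts: list[int], multiplier: float = 3.0,
                baseline_days: int = 28) -> bool:
    """True if the latest day exceeds multiplier x the trailing average."""
    *history, today = daily_counts[-(baseline_days + 1):]
    baseline = mean(history) if history else 0.0
    return baseline > 0 and today > multiplier * baseline

regulatory = [3, 2, 4, 3, 2, 3, 4, 15]  # hypothetical daily mention counts
if spike_alert(regulatory):
    print("ESCALATE: Regulatory and compliance at 3x+ baseline")
```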

If a theme never appears in any report or alert, it is dead weight. Remove it at the next quarterly review.

Common mistake: the social listening taxonomy does not match the media monitoring taxonomy

A UK energy company used one set of themes in Brandwatch (social listening) and a completely different set in Meltwater (media monitoring). When the head of comms asked "what are people saying about our net zero commitments?" the social team reported under "Sustainability" and the media team reported under "ESG & Climate." The numbers did not match, the framing was different, and the resulting board paper contradicted itself.

Use the same top-level themes across social listening and media monitoring. The queries will differ (social language is informal, media language is editorial), but the theme names and definitions should be identical. This is non-negotiable if you produce cross-channel reports.
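
One practical way to enforce this is a single shared definition file that both teams draw from: theme names and definitions are identical across channels, and only the queries differ. A hypothetical structure -- the query strings are illustrative, not tested platform syntax:

```python
# A shared definition file for social listening and media monitoring.
# Names and definitions are identical; only the per-channel queries vary.
SHARED_THEMES = {
    "Sustainability/ESG": {
        "definition": "Environmental and social responsibility, incl. net zero",
        "queries": {
            "social": 'Brand AND ("green" OR "greenwashing" OR "net zero")',
            "media":  'Brand AND ("net zero" OR "ESG" OR "decarbonisation")',
        },
    },
    # ...the remaining six themes follow the same shape
}

def query_for(theme: str, channel: str) -> str:
    """Fetch the channel-specific query for a shared theme name."""
    return SHARED_THEMES[theme]["queries"][channel]

assert query_for("Sustainability/ESG", "social").startswith("Brand")
```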
