Most media monitoring RFPs ask the wrong questions. They ask "how many sources do you cover?" (every vendor says millions) and "do you support sentiment analysis?" (every vendor says yes). These questions produce identical responses and tell you nothing about which vendor will actually work for your team. The questions below are designed to surface real differences in coverage, accuracy, workflow, and support -- the things that determine whether a GBP 20,000-80,000 annual investment delivers value or creates headaches.
Coverage and source quality
Questions to ask:
"Provide your full UK source list, broken down by: national dailies, national Sundays, regional dailies, consumer magazines, trade/sector titles, broadcast channels, and online-only outlets."
This is the single most important question. Ask for the actual list, not a count. Meltwater, Cision, and Signal AI all claim extensive UK coverage, but the gaps show up in the detail -- specific trade titles, regional papers, and niche sector publications. If Insurance Times, Health Service Journal, Construction News, or The Grocer matters to your sector and the vendor does not index it, that is a deal-breaker you need to find before signing.
"How quickly are new articles indexed after publication? Provide the median and 95th percentile for UK national dailies, trade press, and broadcast transcripts."
Acceptable benchmarks: national dailies should be indexed within 30 minutes of online publication. Trade press within 2 hours. Broadcast transcripts within 4 hours of airing. If the vendor cannot provide these numbers, they have not measured them.
"Which UK broadcast channels do you cover, and do you provide full transcripts, keyword-triggered clips, or both? What is the additional cost for broadcast?"
Broadcast is often sold as an add-on. Get the cost upfront. Ask specifically about BBC, ITV, Channel 4, Sky News, and LBC. If transcripts of the Today programme (Radio 4) are not included, you will have a significant gap in policy-sensitive monitoring.
"How do you handle paywalled content from outlets like the FT, Times, and Telegraph?"
Some vendors provide full-text access through licensing agreements (NLA in the UK). Others provide headline and snippet only. If your stakeholders need to see full articles from the FT and Times, confirm the licensing terms and any per-article costs.
Sentiment and analysis accuracy
"What is the accuracy rate of your automated sentiment classification for UK English language content? Provide precision and recall figures, and explain how you handle sarcasm, British understatement, and sector-specific terminology."
Most vendors claim 70-85% sentiment accuracy. Push for the methodology: is it dictionary-based, machine learning, or LLM-powered? Ask for a test against your own recent coverage -- provide 50 articles and compare the vendor's sentiment scores against your team's manual assessment. A gap of more than 15% on any sentiment category is a red flag.
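Scoring that 50-article comparison takes only a few lines. The sketch below uses placeholder labels; substitute the vendor's scores and your team's manual assessments:

```python
# Placeholder labels for the same articles, scored by the vendor's engine
# and by your team ("positive" / "neutral" / "negative").
vendor = ["positive", "negative", "neutral", "negative", "positive",
          "neutral", "negative", "positive", "neutral", "negative"]
manual = ["positive", "positive", "neutral", "negative", "positive",
          "neutral", "negative", "neutral", "neutral", "negative"]

for category in ("positive", "neutral", "negative"):
    tp = sum(v == m == category for v, m in zip(vendor, manual))
    predicted = vendor.count(category)   # articles the vendor put in this category
    actual = manual.count(category)      # articles your team put in this category
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    print(f"{category}: precision {precision:.0%}, recall {recall:.0%}")
```

Run it per category, not just as an overall accuracy figure: a vendor can hit 80% overall while badly misclassifying the negative coverage that actually matters to your team.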
"Can we create custom sentiment rules or override automated scores?"
This matters because automated sentiment regularly misclassifies sector-specific content. An article saying "the FCA has cleared [your company] of wrongdoing" might score as negative due to the presence of "FCA" and "wrongdoing," even though the actual sentiment is positive. You need the ability to correct this.
Alerting and workflow
"Walk us through the alert configuration: what trigger conditions can we set, how granular are the filters, and what delivery channels do you support (email, SMS, push notification, Slack, Teams, API webhook)?"
Run a live demo with your actual brand queries. Ask the vendor to set up a test alert and show you the end-to-end flow from article publication to alert delivery. Time it.
"How do you handle Boolean query building? Is there a query builder interface, or is it manual syntax? What is the maximum query complexity supported?"
If your brand name collides with common words (think Shell, Next, Sage, or Arm), you need complex Boolean with proximity operators, exclusion lists, and source filters. Ask the vendor to build a test query for your most problematic brand name and evaluate noise levels.
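To make the noise problem concrete, here is the logic a disambiguation query has to encode, sketched in Python. The context and exclusion terms are illustrative, and a real platform would express the same thing in its own Boolean syntax (proximity operators, NOT clauses, source filters):

```python
import re

# Illustrative disambiguation for the brand "Arm" (the chip designer):
# require a sector context term and exclude anatomical or figurative uses.
CONTEXT = re.compile(r"\b(chip|semiconductor|SoftBank|Nvidia|licensing)\b", re.I)
EXCLUDE = re.compile(r"\b(arm in arm|broken arm|arms race)\b", re.I)

def is_relevant(text: str) -> bool:
    """Keep an article only if 'Arm' appears with context and no excluded phrase."""
    return (re.search(r"\bArm\b", text) is not None
            and CONTEXT.search(text) is not None
            and EXCLUDE.search(text) is None)

print(is_relevant("Arm unveils new chip architecture for data centres"))  # True
print(is_relevant("Walking arm in arm along the seafront"))               # False
```

If the vendor's query builder cannot express all three elements (exact-case match, required context, exclusions), expect to spend your first month drowning in irrelevant clips.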
"Can we create and manage exclusion lists for irrelevant sources, topics, or authors? How easy is it to update these over time?"
Exclusion management is a daily operational task. If it requires a support ticket to add an exclusion, the platform will not keep up with your noise reduction needs.
Data access and integration
"What data can we export? In what formats (CSV, JSON, Excel, API)? Is there a per-export fee or a data cap?"
Some vendors restrict data exports or charge per API call. If you plan to feed monitoring data into Power BI, Looker Studio, or a data warehouse, API access with no per-call fee is essential. Confirm the fields available in exports: article text, metadata, sentiment, tags, outlet, author, reach estimate.
"Do you provide an API? What are the rate limits, authentication methods, and documentation quality?"
Ask to see the API documentation before signing. If it is poorly documented or the rate limits are restrictive (e.g., 100 calls per day), integration will be painful.
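A quick back-of-envelope check shows why a low daily cap is a problem. All the figures below are assumptions about a typical integration; plug in your own:

```python
# Assumed integration: hourly sync into a BI tool, plus ad-hoc exports.
articles_per_day = 400          # assumed daily volume for an active UK brand
page_size = 100                 # articles returned per API call (assumed)
syncs_per_day = 24              # hourly refresh
calls_per_sync = -(-articles_per_day // page_size)  # ceiling division
ad_hoc_calls = 50               # headroom for manual exports and retries

daily_calls = syncs_per_day * calls_per_sync + ad_hoc_calls
print(f"~{daily_calls} calls/day needed")
```

Even this modest setup needs well over 100 calls a day, so a "100 calls per day" cap fails before you start. Ask for a limit of at least 10x your projected usage, and confirm whether overages are throttled or billed.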
"Who owns the data we generate in the platform (tags, scores, annotations, custom fields)? Can we export it all if we leave?"
Data portability is critical. If you spend 18 months tagging and scoring coverage, you should be able to take that enrichment with you. Get this in writing.
Support and onboarding
"What does onboarding look like? How long until we are fully operational? Who is our dedicated contact?"
Acceptable timeline: 2-4 weeks for basic monitoring, 6-8 weeks for full integration including custom dashboards and API setup. If the vendor says "same day," they are underselling the configuration effort required.
"What are your SLAs for support response? Differentiate between routine requests and urgent issues (e.g., alerts not firing, platform down)."
For a UK-based team, confirm the support hours are UK-friendly. A vendor whose support team is US-Pacific time will not help you at 08:00 GMT when your morning alert fails.
"Do you provide a dedicated UK account manager, or is support ticket-based?"
For contracts above GBP 30,000/year, a named account manager is reasonable to expect. Below that threshold, at least confirm you have access to a UK-based support team.
Pricing and contract terms
"Provide a full cost breakdown: platform fee, user licences, broadcast add-on, API access, NLA/CLA licensing, premium alerts, and any per-article or per-clip charges."
Hidden costs in media monitoring contracts are common. The most frequent surprises: broadcast clip charges (GBP 15-50 per clip), NLA compliance levies, and API overage fees. Get every line item in writing before procurement review.
"What are the contract term options, renewal notice periods, and early termination terms?"
Avoid auto-renewal clauses that require 90+ days' notice. A 30-day notice period is reasonable. If the vendor insists on a 3-year term, negotiate a break clause at 12 months.
Common mistake: evaluating on the demo, not the data
A UK professional services firm chose a monitoring vendor based on a polished demo that showcased AI-powered dashboards and real-time analytics. In the first month of the contract, they discovered the vendor did not index three of their five priority trade titles, sentiment accuracy was below 60% for financial services content, and the promised "real-time" alerts had a 4-hour delay on broadcast transcripts. The firm spent 6 months negotiating improvements before eventually switching vendors -- at significant cost and disruption.
Run a parallel pilot with your actual queries before signing. Two weeks of real data is worth more than two hours of demo.