Quality & QA · April 21, 2026 · 8 min read

Contact Center Quality Assurance: The 98% You Never Hear

Most contact centers run quality assurance on 2-5% of calls. The other 98% is where compliance violations live, where churn signals hide, and where the real coaching opportunities sit. Here's what we've learned from turning the lights on.

John Iosifov
Editor · Ender Turing Blog

The 2% you scored is not the 98% nobody heard.

Manual sampling captures a sliver. The rest is where compliance violations, churn signals, and revenue intelligence live.


We analyzed onboarding calls for a European lending company last year. Their contact-center quality assurance team was scoring five calls per agent per month. Good scores across the board. Compliance looked clean.

Then we turned on 100% monitoring. Within the first week, we found that 23% of agents were skipping mandatory risk disclosures on loan products. Not sometimes. Routinely. The QA team had been scoring the same five calls where agents knew they were being watched. The other 4,500 calls per month? Nobody heard them.

That gap between what you sample and what actually happens is where compliance violations live, where churn signals hide, and where the revenue intelligence sits that your CRM will never capture. This is the core problem with traditional contact-center quality assurance: you're making decisions from 2% of the data and hoping the other 98% looks the same. It doesn't.

Contact Center Quality Assurance by the Numbers: Why 2% Sampling Fails

Here's why the sampling model breaks down. A typical contact center handles between 2,000 and 50,000 calls per month. QA teams manually review 2-5 calls per agent per month. In a 200-seat center running 20,000 monthly calls, that's 400-1,000 reviews. At best, 5% coverage. At worst, 2%.

That's not quality assurance. That's a lottery.

The statistical problem is severe. Suppose an agent skips a disclosure on 5% of calls (a common rate for soft compliance issues). If manual QA reviews two of that agent's calls in a month, the chance that at least one reviewed call contains the violation is about 10%. Even at five reviews, it's still under 25%. Most months, the problem is simply invisible.
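The sampling math above is easy to reproduce. A minimal sketch, using the article's illustrative figures (200 seats, 20,000 monthly calls, 2-5 reviews per agent, a 5% violation rate):

```python
# Back-of-the-envelope QA sampling math (figures from the article).
seats = 200
monthly_calls = 20_000
reviews_per_agent = (2, 5)  # manual reviews per agent per month

# Coverage: total reviews as a share of total calls.
low, high = (seats * r / monthly_calls for r in reviews_per_agent)
print(f"coverage: {low:.0%} to {high:.0%}")  # 2% to 5%

# Chance that an agent's 5%-rate violation appears in their sampled calls:
# 1 minus the probability that every sampled call is clean.
violation_rate = 0.05
for n in reviews_per_agent:
    p_detect = 1 - (1 - violation_rate) ** n
    print(f"{n} reviews/month -> {p_detect:.0%} chance of catching it")
```

The 10% figure in the text corresponds to the two-review end of the range; even five reviews per agent leaves roughly a three-in-four chance the violation goes unseen in a given month.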

And the people doing the reviews? They're inconsistent. McKinsey found that manual QA scoring hits 70-80% inter-rater reliability. Two QA analysts listen to the same call and disagree on the score 20-30% of the time. Automated QA systems achieve over 90% accuracy. The machine doesn't have a bad Monday.

What Lives in the 98%

When we deploy 100% call monitoring for new customers, the first 30 days always surface the same patterns. Every single time, without exception.

Compliance drift

Agents who pass manual QA consistently are skipping required disclosures on calls that aren't being monitored. In financial services, this is a ticking regulatory bomb. The FCA's Consumer Duty (fully enforced since July 2024) now requires firms to monitor outcomes across all customer interactions, not just samples.

Revenue signals nobody hears

Cross-sell opportunities mentioned by customers. Churn warnings expressed in frustration patterns. Product feedback that never makes it to the product team. McKinsey estimates contact centers drive 25% of new revenue for credit-card companies and 60% for telecom. But that revenue intelligence sits in conversations nobody analyzes.

Agent gaming

Call avoidance patterns. Handle-time manipulation. Cherry-picking easy calls. These behaviors are invisible in a 2% sample because agents know when they're being scored. In behavioral economics, this is called the Hawthorne effect. And it's rampant in contact centers.

Coaching blind spots

Your best agent closes 3x more upsells than average. What do they say differently in the first 90 seconds? With 2% sampling, you'll never know. With 100% analysis, you can extract the exact phrases, tonality patterns, and conversation structures that separate top performers from the rest.

You can't audit what you don't monitor. "We listen to five calls a month" is not an answer in 2026.

The Compliance Pressure Is Accelerating

Three regulatory shifts are making 2% monitoring untenable. Not in five years. Now.

PCI DSS 4.0.1 (mandatory since March 2025)

The new standard requires that any recording capturing sensitive authentication data after authorization generates a control failure. Traditional pause-and-resume recording no longer qualifies in environments where card numbers are spoken aloud.

EU AI Act high-risk obligations (effective August 2026)

AI systems used in contact centers for automated performance monitoring or employment decisions are classified as high-risk. Deployers must retain system logs, conduct fundamental rights impact assessments, and meet transparency obligations.

SEC and CFTC off-channel enforcement

Since December 2021, regulators have levied nearly $3.6 billion in penalties against financial firms for record-keeping failures. The message from regulators is clear: if you can't prove you monitored it, you're liable for it.

From 2% to 100%: What Actually Changes

Here's what we've seen across our own deployments at Ender Turing when customers make the switch.

Week 1: The shock

QA scores that looked healthy under sampling drop 15-25% when every call is scored. Leaders realize they've been flying blind, and the recalibration is uncomfortable but necessary.

Month 1: Pattern recognition kicks in

With full data, you start seeing things that sampling hides. Which product generates the most confused calls. Which shift has the highest compliance drift. These are systemic patterns that require thousands of data points to see.

Month 3: Coaching becomes targeted

Instead of generic coaching sessions based on a handful of cherry-picked calls, managers can identify specific skill gaps per agent. SQM Group documented up to 600% ROI with payback inside three months.

Month 6: The data compounds

You have enough longitudinal data to spot trends. Agent-attrition risk from conversation patterns. Seasonal compliance drift. This is where contact-center quality assurance stops being about scoring calls and starts being about running the business.

Want to see what 100% monitoring looks like on your own conversations?

30-minute call with one of our deployment leads.

Book A Demo

The Mid-Market Trap

Large enterprises are adopting fast. Speech-analytics deployment sits at roughly 44% across the industry. Future Market Insights found that 55.6% of the $25.3 billion conversation-intelligence market goes to large enterprises. Mid-market companies (200-1,000 seats) are dramatically underserved.

The reason is straightforward. First-generation speech-analytics platforms were built for 5,000-seat deployments. A 300-seat center can't justify that investment or absorb that disruption.

But compliance obligations don't shrink with company size. The regulatory pressure is identical. Until recently, the accessibility of the tooling was not.

What Contact Center Quality Assurance Actually Requires in 2026

Monitor every interaction

Not 5%. Not 20%. All of them. Voice, chat, email, and bot conversations. If you're not analyzing 100% of interactions, you don't have quality assurance. You have quality sampling.

Score automatically, coach in real time

Automated scoring with over 90% accuracy means QA teams can focus on coaching instead of listening. Real-time alerts catch compliance issues as they happen.

Connect QA to business outcomes

Quality scores in isolation tell you nothing. When quality management connects to CRM data, CSAT results, and revenue outcomes, you can answer the questions that matter.

Build the audit trail

PCI DSS 4.0.1 and the EU AI Act both demand comprehensive logging. This isn't optional compliance overhead - it's the foundation of defensibility when a regulator asks how you monitor quality.

Five Things You Can Do This Week

You don't need a six-month transformation plan. Start here.

  1. Audit your actual coverage. Pull the numbers. How many interactions happened last month? How many did QA review?
  2. Map your compliance exposure. List every regulatory requirement that touches customer conversations.
  3. Calculate the cost of manual QA. Take your QA team's fully loaded monthly cost and divide it by the number of calls reviewed. That's your cost per reviewed call.
  4. Run a 30-day pilot on 100% of calls. Most automated QA platforms can deploy alongside existing tools.
  5. Connect QA data to one business metric. Pick one: CSAT, FCR, agent attrition, or revenue per call.
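For step 3, the arithmetic fits in a few lines. A minimal sketch; every input here is a hypothetical placeholder, so substitute your own headcount, loaded cost, and review volume:

```python
# Cost-per-reviewed-call calculator.
# All inputs are hypothetical placeholders -- swap in your own figures.
qa_headcount = 4
loaded_annual_cost_per_analyst = 55_000  # salary + benefits + overhead
calls_reviewed_per_month = 800           # total manual reviews across the team

monthly_qa_cost = qa_headcount * loaded_annual_cost_per_analyst / 12
cost_per_review = monthly_qa_cost / calls_reviewed_per_month
print(f"~{cost_per_review:.2f} per reviewed call")
```

With these placeholder numbers, each manually reviewed call costs roughly 23 currency units, which is the baseline you'd compare an automated-QA quote against.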

The 98% of conversations nobody hears aren't silent. They're full of signals. The only question is whether you're listening.

Tags Quality Assurance Compliance 100% Monitoring Contact Center PCI DSS FCA

About the author

John Iosifov

Editor of the Ender Turing Blog. Writes about contact-center operations, AI quality assurance, and the gap between vendor promises and operator reality.


Want to see this on your conversations?

Reading is great. Seeing what 100% monitoring looks like on your actual calls is better. 30-minute call with a deployment lead - we'll send back a written outline of what's realistic in your first 90 days.