Meet Ender, the AI Quality Assurance Specialist who will join your QA team and help it shine!

What's New — January 2026

This month, we're moving beyond high-level metrics to give you the granular controls and deep analytics needed to drive meaningful performance improvements. January's updates focus on delivering unprecedented precision in your core contact center KPIs, from quality scores and First Call Resolution to agent coaching and QA program coverage. Now you can get to the root cause of issues, make fairer data-driven decisions, and coach your team with more accuracy than ever before.

Executive Summary

Precise Quality Scores — Configure decimal-point precision for all quality scores to ensure fair compensation, accurate KPI tracking, and complete transparency in agent performance evaluations.

Deeper First Call Resolution Analysis — Isolate the original "root call" in a repeat-contact chain to understand exactly why customers are calling back and drive targeted improvements to FCR.

QA Scoring Coverage Metrics — Track how many conversations are scored per agent, with a clear breakdown between human and automated reviews, to ensure comprehensive and balanced QA coverage.

Conversation Balance Analytics — Identify one-sided conversations by tracking the longest monologue for both agents and customers, creating new coaching opportunities to improve engagement.

Highlights

Quality Score Precision — Base Compensation and Coaching on Exact Data

Rounding quality scores to whole numbers can hide small but significant performance differences, leading to ambiguity in reporting and unfairness in performance-based compensation.

Ensure fair compensation: When a bonus depends on the difference between a 94.2% and a 94.4% score — both of which used to display as 94% — you can now see the exact numbers and reward your team accurately.

Gain reporting accuracy: Track team and organizational quality KPIs with decimal-level precision, eliminating rounding errors in your trend analysis.

Increase agent trust: Provide agents with full transparency into their exact performance, building confidence that evaluations are precise and fair.

  • Before: Scores were displayed as rounded whole numbers (e.g., "98%"), making it impossible to distinguish between an agent who scored 97.6% and one who scored 98.4%.
  • After: Scores are now displayed with configurable decimal precision (e.g., "98.47%"), giving you an exact view of performance.
  • In practice: QA Manager Julia can now confidently determine the winner of a monthly quality bonus by comparing the precise scores of her two top-performing agents, 99.15% and 98.85%.
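To make the rounding problem concrete, here is a minimal sketch of score formatting with configurable precision. The `format_score` helper is hypothetical (not the product's actual API); it simply mirrors what the "Quality Metrics Decimal Precision" setting controls.

```python
# Hypothetical helper: render a quality score with configurable decimal
# precision. Not the product's real API -- an illustration of the setting.

def format_score(score: float, decimals: int = 0) -> str:
    """Format a quality score as a percentage string with the given precision."""
    return f"{score:.{decimals}f}%"

# With whole-number rounding, two different scores collapse into one value:
print(format_score(97.6))     # → "98%"
print(format_score(98.4))     # → "98%"

# With two decimal places, the gap becomes visible:
print(format_score(97.6, 2))  # → "97.60%"
print(format_score(98.4, 2))  # → "98.40%"
```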

Get started: Navigate to Analytics > Configuration Variables and set your preferred number of decimal places for "Quality Metrics Decimal Precision."

Root Call Filter — Pinpoint the True Source of Repeat Calls

Analyzing repeat calls often shows you the follow-up conversations, but it's difficult to find and analyze the original interaction that failed to resolve the customer's issue in the first place.

Perform true FCR analysis: By focusing only on the initial calls that resulted in a callback, you can get to the heart of why First Call Resolution is failing.

Deliver targeted coaching: Isolate the specific conversations that failed FCR to provide agents with concrete examples and targeted coaching on resolution techniques.

Drive process improvements: Identify patterns in unresolved root calls to discover and fix broken processes, knowledge gaps, or policy issues that cause repeat contacts.

  • Before: Filtering for repeat calls would show all conversations in the chain, mixing the initial call with the follow-ups.
  • After: You can now apply a "Root calls only" filter to isolate just the first conversation in a repeat call sequence.
  • In practice: Director Mark filters for "Root calls only" with the category "Product Inquiry" to understand why a new product launch is generating so many callbacks, allowing him to update agent training guides.
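Conceptually, a "root call" is a conversation that is not itself a callback but is followed by one within the repeat-contact interval. The sketch below is a simplified illustration of that idea (the function name, data shape, and seven-day default are assumptions, not the product's implementation):

```python
from datetime import datetime, timedelta

# Illustrative sketch of "Root calls only" logic for one customer's call
# history, sorted by time. A root call starts a repeat chain: it is not a
# callback itself, but a callback follows it within the interval.

def root_calls(timestamps, interval=timedelta(days=7)):
    """Return the timestamps of root calls from a sorted list of calls."""
    roots = []
    for i, call in enumerate(timestamps):
        is_callback = i > 0 and call - timestamps[i - 1] <= interval
        has_callback = (i + 1 < len(timestamps)
                        and timestamps[i + 1] - call <= interval)
        if has_callback and not is_callback:
            roots.append(call)
    return roots

calls = [datetime(2026, 1, 1),   # original, unresolved issue
         datetime(2026, 1, 3),   # callback two days later
         datetime(2026, 1, 20)]  # unrelated new contact
print(root_calls(calls))  # → [datetime.datetime(2026, 1, 1, 0, 0)]
```

Only the January 1 call is returned: January 3 is a callback, and January 20 has no follow-up within the interval.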

Get started: In the Conversations view, use the "Previous Call Interval" filter and check the new "Root calls only" option.

Scoring Coverage Metrics — Track Your QA Program's Reach

It's challenging to know if your QA program is providing adequate review coverage across all agents, especially when balancing manual reviews with automated scoring.

Visualize QA coverage instantly: New columns in the Agent Metrics table show you exactly what percentage of each agent's conversations have been scored.

Balance human and automated scoring: A dedicated breakdown for "Scored (Human)" and "Scored (Automated)" helps you manage your AI adoption and ensure human reviewers are focused where they're needed most.

Identify at-risk agents: Quickly spot agents with low scoring coverage who may be flying under the radar and missing out on valuable coaching and feedback.

  • Before: You had to manually cross-reference reports to estimate how many calls were scored for each agent.
  • After: The Agent Metrics table now includes "Scored (Human)" and "Scored (Automated)" columns with clear bar visualizations for at-a-glance comparison.
  • In practice: Team Lead Sarah notices in the Agent Metrics table that a new hire has very low "Scored (Human)" coverage, so she assigns her dedicated QA analyst to review more of that agent's calls this week.
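The coverage numbers behind the new columns are straightforward percentages. A minimal sketch, assuming a simplified data shape (the `scored_by` field is hypothetical, not the product's schema):

```python
# Illustrative sketch: per-agent scoring coverage split by review type,
# as in the "Scored (Human)" and "Scored (Automated)" columns.
# Each conversation dict carries a hypothetical 'scored_by' field:
# 'human', 'automated', or None (not scored).

def coverage(conversations):
    """Return human and automated scoring coverage as percentages."""
    total = len(conversations)
    human = sum(1 for c in conversations if c["scored_by"] == "human")
    auto = sum(1 for c in conversations if c["scored_by"] == "automated")
    return {"human_pct": 100 * human / total,
            "automated_pct": 100 * auto / total}

calls = [{"scored_by": "human"}, {"scored_by": "automated"},
         {"scored_by": "automated"}, {"scored_by": None}]
print(coverage(calls))  # → {'human_pct': 25.0, 'automated_pct': 50.0}
```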

Get started: Go to the Agent Metrics table, click the column configuration icon, and enable the "Scored (Human)" and "Scored (Automated)" columns.

Longest Monologue Tracking — Improve Conversation Balance and Engagement

Without listening to every call, it's hard to quantitatively identify when an agent is dominating the conversation or when a customer is rambling, both of which are critical coaching opportunities.

Identify one-sided conversations: A new metric automatically calculates the longest continuous speaking turn for both the agent and the customer in every call.

Coach on engagement techniques: Filter for calls where agents have long monologues to coach them on asking more questions and creating a better dialogue with customers.

Benchmark conversational dynamics: Compare monologue durations across your team to understand what healthy conversation balance looks like for different call types.

  • Before: Identifying an agent monologue required manually listening to a call and subjectively judging if the agent was talking too much.
  • After: You can see the "Longest Monologue" duration as a metric in your dashboards and filter for conversations where an agent spoke uninterrupted for more than 60 seconds.
  • In practice: Sales Coach David filters for agent monologues over 90 seconds. He reviews these calls with his team to train them on turning a long pitch into an interactive discovery conversation.
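The metric itself can be sketched from a call transcript: consecutive speaking turns by the same speaker merge into one continuous monologue, and the longest one is kept per speaker. This is an illustration under assumed data shapes, not the product's implementation:

```python
# Illustrative sketch: "Longest Monologue" per speaker from a transcript
# of (speaker, seconds) turns. Consecutive turns by the same speaker are
# treated as one uninterrupted monologue.

def longest_monologue(turns):
    """Return the longest continuous speaking duration per speaker."""
    longest = {}
    current_speaker, current_len = None, 0.0
    for speaker, seconds in turns:
        if speaker == current_speaker:
            current_len += seconds        # same speaker keeps talking
        else:
            current_speaker, current_len = speaker, seconds
        longest[speaker] = max(longest.get(speaker, 0.0), current_len)
    return longest

turns = [("agent", 40), ("agent", 35), ("customer", 10), ("agent", 20)]
print(longest_monologue(turns))  # → {'agent': 75, 'customer': 10}
```

Filtering the result for values over a threshold (e.g., 60 or 90 seconds) reproduces the coaching workflow described above.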

Get started: The new metrics are available in the Agent Metrics dashboard and as filters in the Conversations view.

Improvements

Reliable automation for negative filters. Automation rules using "is NOT in category" conditions now trigger correctly after AI categorization is complete. This ensures workflows that escalate unresolved issues or perform other actions based on the absence of a category run without getting stuck.

Customizable colors for conversation duration. For business models where longer calls are better (like sales or complex support), you can now invert the colors on duration charts. Simply uncheck "Short conversations preferred" in Analytics Configuration to make longer call durations display in green.

Fixes

▸ Fixed an issue where the categorization counter for automation rules could become out of sync, ensuring more reliable and accurate rule execution.
