Meet Ender, the AI Quality Assurance Specialist who will join your QA team and help it shine!

What's New — February 2026

This month's updates are focused on turning your contact center data into confident, decisive action. We’ve reimagined how you can analyze your conversations with a powerful new AI chat experience, and introduced new metrics to show you exactly how much you can trust your automated scores. These new capabilities allow you to move faster, coach smarter, and build quality programs on a foundation of verifiable data.

Executive Summary

Reimagined AI Chat — Ask complex analytical questions about your call center data in plain language and get back researched, data-driven answers from a completely rebuilt AI engine.

Scorecard Accuracy Metrics — Instantly see how well your AI-powered scorecards align with human reviewers, with new accuracy percentages built directly into scorecard lists and setup pages.

Automatic New Customer Identification — Automatically detect and tag first-time customers across all channels, allowing you to filter, report, and run targeted QA on these critical first-impression interactions.

Interactive Dashboards — Drill down from aggregate scores in your Agent Metrics dashboard directly to the underlying conversations with a single click, dramatically speeding up investigations.

Scoring Dynamics Table — Track how your team’s quality scores change over time with a new interactive table that visualizes performance trends for every evaluation point.

Highlights

Reimagined AI Chat — Get Researched Answers to Your Toughest Questions

Simply asking for insights wasn't enough. You needed an AI partner that could perform real analysis, show its work, and collaborate with your entire team.

Get data-driven answers: The new Ender Chat uses a multi-step research process. It assesses your question, plans its research, queries your data, and synthesizes its findings to deliver more accurate and reliable answers.

Collaborate and learn: All AI chat sessions are now visible to your entire team. Discover insights your colleagues have already found, learn from their lines of inquiry, and build on each other's work.

Trust the results: Get full transparency into how the AI arrived at its conclusions with a complete audit trail of its research steps. You can also regenerate answers and provide feedback to continually improve response quality.

  • Before: AI chat gave single-pass answers, and each user could only see their own chat history.
  • After: The AI now follows a structured research pipeline, and all chat sessions are shared across your organization for team-wide visibility.
  • In practice: A QA Manager asks, "What are the most common reasons for customer complaints this month?" and receives a detailed, synthesized answer with supporting data, which they then share with a team lead to inform coaching.

Get started by navigating to the Ender Chat section. Start a new chat by typing your question.

Scorecard Accuracy Metrics — Know Which AI Scores You Can Trust

Automated scoring is a huge time-saver, but its value depends on your confidence in its accuracy. Previously, it was difficult to know how well an AI scorecard's evaluations aligned with your human QA team's judgments.

Assess reliability at a glance: A new accuracy percentage is now displayed directly on the Scorecards list, showing you how closely each automated scorecard matches human evaluations.

Pinpoint what needs tuning: Drill into any scorecard's setup page to see a detailed, per-point accuracy breakdown. Instantly identify which criteria are performing well and which need to be refined for better results.

Make data-driven decisions: Use accuracy data to decide where automation is trustworthy enough for hands-off QA and where human review is still essential, ensuring your quality program is both efficient and effective.

  • Before: You had to manually compare AI-scored and human-scored conversations to guess at a scorecard's accuracy.
  • After: You can see a clear accuracy percentage for every scorecard in the main list and a detailed per-point breakdown inside each one.
  • In practice: A QA Manager scans the scorecards list and sees that the "Compliance" scorecard has 95% accuracy, but the "Customer Sentiment" scorecard is at 70%. They click in and refine the low-performing criteria to improve the scorecard's alignment with human evaluations.

Get started by navigating to the Scorecards list to see accuracy percentages. Click into any scorecard to view the detailed per-point accuracy breakdown.
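Conceptually, the accuracy percentage reflects how often the AI's verdict on a scorecard point matches a human reviewer's verdict on the same conversation. A minimal sketch of that idea (the field names, data shape, and exact formula here are illustrative assumptions, not the platform's actual implementation):

```python
# Hypothetical sketch: per-point agreement between AI and human scores.
# Field names and the matching rule are assumptions for illustration only.
from collections import defaultdict

def accuracy_per_point(evaluations):
    """evaluations: list of dicts with 'point', 'ai_score', 'human_score'."""
    matches = defaultdict(int)
    totals = defaultdict(int)
    for e in evaluations:
        totals[e["point"]] += 1
        if e["ai_score"] == e["human_score"]:
            matches[e["point"]] += 1
    # Percentage of conversations where AI and human agree, per point.
    return {p: round(100 * matches[p] / totals[p], 1) for p in totals}

evals = [
    {"point": "Compliance", "ai_score": 1, "human_score": 1},
    {"point": "Compliance", "ai_score": 0, "human_score": 1},
    {"point": "Sentiment", "ai_score": 1, "human_score": 1},
    {"point": "Sentiment", "ai_score": 1, "human_score": 1},
]
print(accuracy_per_point(evals))  # {'Compliance': 50.0, 'Sentiment': 100.0}
```

A per-point breakdown like this is what lets you tune the weakest criteria individually rather than discarding a whole scorecard.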

Automatic New Customer Identification — Monitor First-Impression Quality

Understanding how your team handles first-time customer interactions is critical, but identifying these conversations was a manual, time-consuming process.

Automate first-contact discovery: The system now automatically detects when a customer is contacting you for the first time and applies a "New Customer" CRM status to their conversation.

Filter and report on new customers: Use the "New Customer" status in filters, reports, and dashboards to easily track new customer volume and analyze their interactions separately.

Improve customer data quality: Smarter, prefix-aware phone number recognition unifies customer identities across different number formats (e.g., +1, 001, local), reducing duplicate customer records.

  • Before: Identifying a first-time customer required manual tagging or cross-referencing with an external CRM.
  • After: Conversations with first-time customers are automatically tagged with the "New Customer" CRM status, ready for filtering and analysis.
  • In practice: A Team Lead creates a saved filter for all conversations with the "New Customer" status to regularly review how their agents handle initial onboarding and welcome scripts.

This feature works automatically. The "New Customer" status is now available in all CRM status filters throughout the platform.
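To illustrate the prefix-aware recognition idea, here is a minimal sketch that unifies "+1", "001", and local number formats into one canonical form. This is an assumption-laden toy for US numbers only; the platform's actual matching logic is not public:

```python
# Hypothetical sketch of prefix-aware phone normalization: map "+1", "00"
# international-prefix, and local formats to one canonical number so the
# same customer is not duplicated. US-only toy, not the platform's logic.
import re

def normalize_us(number: str, default_country: str = "1") -> str:
    digits = re.sub(r"\D", "", number)      # strip everything but digits
    if digits.startswith("00"):             # international dialing prefix
        digits = digits[2:]
    elif len(digits) == 10:                 # local format, no country code
        digits = default_country + digits
    return "+" + digits

for n in ["+1 (415) 555-0100", "001-415-555-0100", "4155550100"]:
    print(normalize_us(n))   # all three print +14155550100
```

Once all variants resolve to the same canonical number, "first-time customer" detection becomes a simple lookup against previously seen numbers.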

Interactive Dashboards — Go from Metric to Conversation in One Click

High-level metrics on a dashboard are useful, but their real power comes from being able to investigate the "why" behind the numbers. This previously required navigating away and manually building filters to find the relevant conversations.

Instantly drill down: All score columns (AQS, IQS, average score) in the Agent Metrics dashboard are now clickable. Click any score to jump directly to a pre-filtered list of the conversations that make up that number.

Accelerate investigations: Stop wasting time recreating filters. Go from seeing a low score on your dashboard to reviewing the exact conversations that caused it in a single, seamless action.

Analyze across all scorecards: When filtering manually, you can now use the "Any Scorecard" option to find conversations that meet a score threshold without being limited to a single, specific scorecard.

  • Before: Score columns in the Agent Metrics table were static numbers. Investigating a score required you to navigate to the Conversations view and build filters from scratch.
  • After: Score columns are now interactive links that take you directly to the specific conversations behind that metric, with all filters pre-applied.
  • In practice: A QA Manager notices an agent's AQS dropped this week. They click the score on the dashboard and are immediately taken to that agent's auto-scored conversations to identify the issue.

Get started by navigating to Dashboard > Agent Metrics. Click any value in the AQS, IQS, or average score columns to jump to the matching conversations.

Scoring Dynamics Table — Track Quality Trends Over Time

To understand if your coaching and process changes are working, you need to see how quality scores are trending. Previously, this meant exporting data and building reports manually in spreadsheets.

Visualize performance trends: A new Scoring Dynamics table on your main dashboard shows average scores for each evaluation point, broken down by week or month, so you can see trends at a glance.

Identify changing criteria: Quickly spot which evaluation criteria are improving and which are declining across your team, helping you make data-driven decisions about where to focus training.

Compare periods instantly: The interactive table lets you compare performance across time periods directly within the platform, eliminating the need for manual calculations or spreadsheet analysis.

  • Before: Analyzing score trends over time required you to export data to a spreadsheet for manual comparison.
  • After: The new Scoring Dynamics table provides an instant, period-over-period comparison of all your scorecard points directly on your dashboard.
  • In practice: During a weekly meeting, a Team Lead reviews the Scoring Dynamics table and sees that scores for "Proper Greeting" are improving, but scores for "Knowledge Check" have declined, highlighting an immediate coaching priority.

Get started by visiting your main Dashboard. In the new Scoring Dynamics table, select a scorecard to view its trends.
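The period-over-period view amounts to averaging scores per evaluation point per period. A minimal sketch of that aggregation with invented record names (the platform's real data model may differ):

```python
# Hypothetical sketch of the Scoring Dynamics aggregation: average score per
# evaluation point per week, computed from flat evaluation records.
# Record fields ('week', 'point', 'score') are assumed names.
from collections import defaultdict
from statistics import mean

records = [
    {"week": "2026-W05", "point": "Proper Greeting", "score": 70},
    {"week": "2026-W06", "point": "Proper Greeting", "score": 80},
    {"week": "2026-W05", "point": "Knowledge Check", "score": 90},
    {"week": "2026-W06", "point": "Knowledge Check", "score": 75},
]

grouped = defaultdict(list)
for r in records:
    grouped[(r["point"], r["week"])].append(r["score"])

table = defaultdict(dict)  # point -> {week: average score}
for (point, week), scores in grouped.items():
    table[point][week] = mean(scores)

for point, weeks in table.items():
    trend = "improving" if weeks["2026-W06"] > weeks["2026-W05"] else "declining"
    print(point, weeks, trend)
```

Reading across a row shows one criterion's trajectory; reading down a column compares all criteria within one period.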

Improvements

Improved Score Over Time Reports — Score Over Time reports are now more accurate and insightful. Critical scorecard points are now clearly marked as "CRITICAL" for better visibility, and a fix ensures that historical data for all past periods now displays correctly instead of showing "N/A".

Enhanced Tagging with Emojis and Longer Phrases — You have more flexibility when creating tags. You can now use emojis (e.g., "⚠️ Complaint") for visual categorization, write longer tag phrases up to 15 words for more detail, and trust that tags with special characters will match conversations reliably.

More Reliable Automation Triggers — Automations that use negative filters (e.g., trigger "when conversation is NOT in category X") now fire reliably. A fix resolves an issue where these workflows could get stuck, ensuring your automated processes run as expected.

Cleaner Data Ingestion — Your analytics are now more accurate thanks to improved data ingestion. The system now automatically prevents duplicate conversation records from being created if the same audio file is uploaded multiple times, ensuring your reports reflect true conversation counts.
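One common way to prevent duplicate records from repeated uploads is to fingerprint the file contents. A minimal sketch of that pattern (the platform's actual deduplication mechanism is not public; this is only an illustration):

```python
# Hypothetical sketch of upload deduplication: fingerprint the audio bytes
# with SHA-256 and skip files whose hash has already been ingested.
# This illustrates the general pattern, not the platform's implementation.
import hashlib

seen: set[str] = set()

def ingest(audio_bytes: bytes) -> bool:
    """Return True if a new conversation record should be created."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    if digest in seen:
        return False        # duplicate upload; skip record creation
    seen.add(digest)
    return True

print(ingest(b"call-recording"))   # True  (first upload creates a record)
print(ingest(b"call-recording"))   # False (same file again is skipped)
```

Content hashing catches re-uploads even when the file name or upload path differs, which is what keeps report counts aligned with real conversation volume.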

Fixes

▸ Scorecard setup pages now have cleaner, consistently aligned columns for easier reading.

▸ Automation links for conversations filtered by scorecard criteria now direct you to the correct results.

▸ You can now drag and drop to reorder columns in the Agent Metrics table.
