How Central and Eastern Europe's largest banking group rebuilt its quality programme on Ender Turing — covering eight languages, eliminating QA backlog, and surfacing compliance risks in hours instead of weeks.
OTP Bank's contact-center quality team was reviewing roughly 3% of calls each month by hand — barely enough to spot recurring issues, and far too late to coach agents while a problem was still fresh.
The team had three structural problems to solve in parallel: scoring at scale, multi-language consistency, and getting actionable signal back to team leads in days, not weeks.
QA reviewed ~3% of monthly call volume. Recurring compliance gaps and misselling patterns surfaced only during quarterly audits.
Each subsidiary kept its own scorecard in Excel. Cross-country benchmarking was effectively impossible for senior leadership.
Average lag between a problem call and the coaching session was 11–14 days. By the time an agent heard the feedback, the moment was gone.
Mandatory disclosures (cooling-off rights, APR statements, data-processing consent) were only verified post-hoc — exposing the bank to regulator findings.
Every inbound and outbound call transcribed end-to-end with speaker diarization and entity extraction (account numbers, product names, regulatory phrases).
One template scoring 18 criteria across greeting, identity verification, problem framing, solution offered, empathy, compliance disclosures, and closing — applied consistently across all subsidiaries.
Custom rules trigger flags within minutes of call end when mandatory disclosures are missed, sensitive topics arise, or risk language appears.
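At its simplest, a disclosure check like this can be approximated with phrase rules run over the agent side of the transcript once the call ends. A minimal sketch, assuming pure regex matching; the rule names and phrases here are invented for illustration, not Ender Turing's actual rule library:

```python
import re

# Hypothetical rule set: each required disclosure is a name plus a pattern
# that must appear somewhere in the agent-side transcript.
REQUIRED_DISCLOSURES = {
    "apr_statement": re.compile(r"\bannual percentage rate\b|\bAPR\b"),
    "cooling_off": re.compile(r"\bcooling[- ]off\b|\bright to withdraw\b", re.I),
    "data_consent": re.compile(r"\bconsent\b.*\bdata\b|\bdata processing\b", re.I),
}

# Phrases that should never appear (misselling / risk language).
RISK_LANGUAGE = re.compile(r"\bguaranteed returns?\b|\bno risk\b", re.I)

def flag_call(agent_transcript: str) -> list[str]:
    """Return the flags raised for one finished call."""
    flags = [
        f"missing:{name}"
        for name, pattern in REQUIRED_DISCLOSURES.items()
        if not pattern.search(agent_transcript)
    ]
    if RISK_LANGUAGE.search(agent_transcript):
        flags.append("risk_language")
    return flags
```

Production systems match against normalized ASR output and per-language rule sets rather than raw English regexes, but the shape — required patterns, forbidden patterns, flags emitted per call — is the same.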
Localized dashboards for each market plus a group-level rollup. Team leads see queue health, top issues, and coaching candidates without leaving their workflow.
Every agent gets weekly auto-scored calls in their own language with playback timestamps for each evaluation criterion — making feedback specific, not abstract.
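The scorecard-plus-timestamp feedback described above can be modeled with a small data structure: each criterion carries a weight, a pass/fail result, and a playback position. A hypothetical sketch — the criterion names, weights, and field layout are illustrative, not the bank's actual 18-criterion rubric:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CriterionResult:
    name: str                 # e.g. "Cooling-off rights explained"
    category: str             # greeting, identity, compliance, closing, ...
    weight: float             # share of the total score
    passed: bool
    timestamp_s: Optional[float] = None  # where in the recording the evidence sits

def score_call(results: list[CriterionResult]) -> float:
    """Weighted score over whichever criteria were evaluated for this call."""
    total = sum(r.weight for r in results)
    earned = sum(r.weight for r in results if r.passed)
    return round(earned / total, 3) if total else 0.0

def coaching_feedback(results: list[CriterionResult]) -> list[str]:
    """Failed criteria with a playback position the agent can jump to."""
    return [
        f"{r.name} @ {r.timestamp_s:.0f}s"
        for r in results
        if not r.passed and r.timestamp_s is not None
    ]
```

The timestamp is what makes the feedback concrete: instead of "compliance score 0.5", the agent gets a link into the recording at the exact moment the disclosure should have been made.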
Working with OTP Bank's central operations team, Ender Turing replaced the spreadsheet-based QA process with a single platform connected to Genesys across all subsidiaries and the bank's in-house CRM.
The rollout focused on parity over reinvention: instead of replacing the existing 18-criterion scorecard, Ender Turing's QM module was configured to score that exact rubric automatically — letting team leads keep their existing language while getting coverage across 100% of calls.
For the markets where the bank operates regulated lending (Hungary, Romania, Bulgaria), real-time disclosure detection was built on top, surfacing compliance gaps within minutes of a call ending rather than during a quarterly audit cycle.
OTP Bank ran the rollout in four phases. Each phase ended with a measurable gate — no phase started until the previous one was signed off by the operations and compliance teams.
Connected Genesys SIPREC streams across three subsidiaries. Mapped CRM metadata. Verified language coverage on a 50,000-call replay set.
Translated the bank's existing 18-criterion rubric into ML-trained classifiers. Calibrated against 1,200 manually scored calls until inter-rater agreement hit 92%.
320 agents on the retail line moved to fully-automated scoring. Daily standups with QA leads. Compliance-rule library expanded to 47 patterns.
Remaining six subsidiaries onboarded in four staggered waves. Group-level dashboard delivered to executive sponsors. First quarterly review held in week 14.
All figures measured against the pre-deployment baseline. Methodology and per-market breakdowns available on request.
"With Ender Turing, we moved from sampling a few hundred calls per month to analyzing every single conversation. Our coaches now spend their time on what actually matters — helping agents grow, not searching for problems. The shift in how we run quality has been one of the biggest operational wins of the past year."
Beyond the headline numbers, the day-to-day shape of the QA function changed materially. Manual call scoring used to consume roughly 70% of QA-specialist time. Today, that share is closer to 8% — limited to dispute reviews and edge-case investigations.
The freed capacity went into three places: the number of structured coaching sessions per month doubled; thematic deep-dives on emerging customer issues became a weekly cadence; and the team built a shared library of "winning calls" that new agents now reference during onboarding.
For the first time, the bank has a single quality dashboard that the COO opens every Monday morning — with comparable scores across Budapest, Bucharest, Sofia, Belgrade, and the rest of the group.
Other regulated, multi-language customer-experience deployments worth a read.
30-minute session with one of our deployment leads. We'll map your current QA process and send back a written outline of what's realistic in your first 90 days, based on customers running similar setups.