Observe.AI
Context
Picture a bustling contact center where customer interactions flow through multiple channels: calls, emails, live chat, and more. Every interaction counts, as it shapes the reputation of a business.
Amid this chaos, Quality Assurance (QA) teams play a crucial role in ensuring that customers consistently receive exceptional experiences. However, outdated tools and fragmented workflows often hinder their efforts, making quality tracking a challenging task.
Read on for a peek at how we tackled some of these challenges head-on with the introduction of “QA Evaluations” on Observe.AI 👇
Problem
QA teams at contact centers were faced with some serious challenges: too many tools, too little time, and too few calls being evaluated. This meant that the agents weren't getting the feedback they needed to improve, and businesses were missing out on key opportunities to raise the bar.
- Fragmented Tools: QA teams grapple with 5–7 disparate tools for evaluations, a process that consumes excessive time and resources and is prone to error.
- Limited Sampling: Only 2% of interactions are evaluated, resulting in subjective feedback and incomplete assessments of agent performance.
- Lack of Actionable Feedback: Without comprehensive data, QA teams struggle to identify improvement areas and implement targeted coaching.
Solution
QA Evaluations — An intuitive, efficient, and accurate way for QA teams to evaluate an interaction and assess agent performance with Observe.AI. Through meticulous development and testing, we reimagined call selection, refined evaluation processes, and streamlined feedback mechanisms.
Key Features:
- Unified Interface: Consolidates evaluation tools into one user-friendly interface, streamlining workflows and saving time.
- Actionable Feedback: Time-stamped transcripts and AI-powered insights enable targeted feedback, fostering faster agent improvement.
- Efficiency Boost: By simplifying evaluations, this feature enhances efficiency, ensuring timely feedback and consistent service quality.
- Data-driven Insights: Comprehensive performance data empowers QA teams with actionable insights for continuous improvement.
My Role
Building the foundations of the QA capability on Observe.AI
I spearheaded the design for Quality Assurance (QA), one of the core functions of the Observe.AI platform. I took the lead in conceptualizing, crafting, and delivering high-impact features to elevate quality teams' performance in contact centers.
Among these, the introduction of QA Evaluations emerged as a foundational and, arguably, the most critical feature. It paved the way for the establishment of robust quality workflows while also allowing us to explore automation and develop early AI experiments.
Project Timeline: Feb 2020 – Apr 2021
Role: Product Designer
Team: 2 PMs, 6 Devs
Impact
The impact was profound. Evaluation coverage expanded well beyond the once elusive 2% of interactions. Agents received targeted feedback, leading to enhanced performance and improved customer satisfaction.
“We no longer need to navigate multiple tools to monitor and evaluate calls - it’s a one-stop shop.”
— Melquin Troncoso, Global QA Director, ERC BPO
Sped up the evaluation process by 40%, enabling faster feedback loops and accelerating new-hire onboarding efforts.
— City Experiences by Hornblower, one of the largest river and harbour cruising companies in the US