Feedback-Based Self-Learning in Large-Scale Conversational AI Agents

Authors

  • Pragaash Ponnusamy, Amazon
  • Alireza Roshan Ghias, Amazon
  • Yi Yi, Amazon
  • Benjamin Yao, Amazon
  • Chenlei Guo, Amazon
  • Ruhi Sarikaya, Amazon

DOI:

https://doi.org/10.1609/aimag.v42i4.15102

Abstract

Today, most large-scale conversational AI agents such as Alexa, Siri, or Google Assistant are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing a partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, and coupling it with a guardrail rewrite selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers.
The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
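To make the absorbing Markov chain idea concrete, here is a minimal sketch of how absorption probabilities can rank candidate reformulations. This is an illustration with invented toy data, not the production system: the utterance states, transition counts, and defect labels are all hypothetical assumptions. Sessions are modeled as a chain over utterance variants, where successful reformulations and abandonment are absorbing states, and the standard fundamental-matrix computation yields the probability that a defective utterance eventually resolves into each absorber.

```python
import numpy as np

# Hypothetical toy data: pooled session transitions between utterance variants.
# States 0 and 1 are defective (transient) utterances; state 2 is a successful
# reformulation and state 3 is session abandonment (both absorbing).
counts = np.array([
    [2, 5, 8, 1],   # from defective utterance A (e.g., a misrecognized title)
    [1, 0, 6, 3],   # from defective utterance B (a partial reformulation)
    [0, 0, 1, 0],   # absorbing: successful reformulation
    [0, 0, 0, 1],   # absorbing: session abandoned
], dtype=float)

# Row-normalize counts into a stochastic transition matrix P.
P = counts / counts.sum(axis=1, keepdims=True)

# Canonical form P = [[Q, R], [0, I]] with t transient states.
t = 2
Q, R = P[:t, :t], P[:t, t:]

# Fundamental matrix N = (I - Q)^-1: expected visits to transient states.
# Absorption probabilities B = N @ R: B[i, j] is the probability that a
# session starting at defect i is eventually absorbed in absorber j.
N = np.linalg.inv(np.eye(t) - Q)
B = N @ R

print(B)
```

In this framing, the rewrite candidate for a defective utterance is the absorbing success state with the highest absorption probability, mined collectively from many sessions rather than from any single user's behavior.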

Published

2022-01-12

How to Cite

Ponnusamy, P., Roshan Ghias, A., Yi, Y., Yao, B., Guo, C., & Sarikaya, R. (2022). Feedback-Based Self-Learning in Large-Scale Conversational AI Agents. AI Magazine, 42(4), 43-56. https://doi.org/10.1609/aimag.v42i4.15102

Issue

Section

Special Topic Articles