TY - JOUR
AU - Ponnusamy, Pragaash
AU - Ghias, Alireza
AU - Yi, Yi
AU - Yao, Benjamin
AU - Guo, Chenlei
AU - Sarikaya, Ruhi
PY - 2022/01/18
Y2 - 2024/03/29
TI - Feedback-Based Self-Learning in Large-Scale Conversational AI Agents
JF - AI Magazine
JA - AIMag
VL - 42
IS - 4
SE - Special Topic Articles
DO - 10.1609/aaai.12025
UR - https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/15102
SP - 43-56
AB - Today, most large-scale conversational AI agents such as Alexa, Siri, or Google Assistant are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing a partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov Chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, and coupling it with a guardrail rewrite selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
ER - 