Automated Conversation Review to Surface Virtual Assistant Misunderstandings: Reducing Cost and Increasing Privacy
DOI: https://doi.org/10.1609/aaai.v34i08.7017

Abstract
With the rise of Intelligent Virtual Assistants (IVAs) comes a corresponding rise in the human effort required to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. In addition, the fewer conversations humans review, the less user data is exposed, which increases privacy. We present a scalable system for automated conversation review that can identify potential miscommunications. Our system provides IVA designers with suggested actions to fix errors in IVA understanding, prioritizes areas of language model repair, and automates the review of conversations where desired.
Verint - Next IT builds IVAs on behalf of other companies and organizations and therefore analyzes large volumes of conversational data. Our review system has been in production for over three years; it saves our company roughly $1.5 million in annotation costs yearly and has shortened the refinement cycle of production IVAs. In this paper, we discuss the system design and compare its performance in identifying errors in IVA understanding to that of human reviewers.