Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection

Authors

  • Alexander Podolskiy Huawei Noah’s Ark Lab, Moscow, Russia
  • Dmitry Lipin Huawei Noah’s Ark Lab, Moscow, Russia
  • Andrey Bout Huawei Noah’s Ark Lab, Moscow, Russia
  • Ekaterina Artemova Huawei Noah’s Ark Lab, Moscow, Russia; HSE University, Moscow, Russia
  • Irina Piontkovskaya Huawei Noah’s Ark Lab, Moscow, Russia

DOI:

https://doi.org/10.1609/aaai.v35i15.17612

Keywords:

Conversational AI/Dialog Systems, Text Classification & Sentiment Analysis, Interpretability & Analysis of NLP Models, General

Abstract

Real-life applications that rely heavily on machine learning, such as dialog systems, demand out-of-domain detection methods. Intent classification models should be equipped with a mechanism that distinguishes seen intents from unseen ones, so that the dialog agent can reject the latter and avoid undesired behavior. However, despite increasing attention to the task, best practices for out-of-domain intent detection have not yet been fully established. This paper conducts a thorough comparison of out-of-domain intent detection methods. We prioritize methods that do not require access to out-of-domain data during training, since gathering such data is extremely time- and labor-consuming due to the lexical and stylistic variation of user utterances. We evaluate multiple contextual encoders and methods that have proven to be efficient on three common intent classification datasets expanded with out-of-domain utterances. Our main findings show that fine-tuning Transformer-based encoders on in-domain data leads to superior results. The Mahalanobis distance, combined with utterance representations derived from Transformer-based encoders, outperforms other methods by a wide margin (1-5% in terms of AUROC) and establishes new state-of-the-art results on all datasets. A broader analysis shows that the reason for this success is that the fine-tuned Transformer constructs homogeneous representations of in-domain utterances, which reveal a geometrical disparity with out-of-domain utterances. In turn, the Mahalanobis distance captures this disparity easily.
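
To illustrate the core idea, the following is a minimal sketch (not the authors' released code) of Mahalanobis-distance out-of-domain scoring on top of pre-computed utterance embeddings. The names train_embs, train_labels, and test_embs are hypothetical placeholders for feature vectors extracted from a fine-tuned Transformer encoder; per-class means and a shared covariance are estimated from in-domain data, and a new utterance is scored by its distance to the nearest class centroid.

import numpy as np

def fit_mahalanobis(train_embs, train_labels):
    # Estimate per-class means and a shared (tied) covariance from in-domain embeddings.
    classes = np.unique(train_labels)
    means = np.stack([train_embs[train_labels == c].mean(axis=0) for c in classes])
    centered = train_embs - means[np.searchsorted(classes, train_labels)]
    cov = centered.T @ centered / len(train_embs)
    # A small ridge term keeps the inversion numerically stable.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def ood_score(embs, means, precision):
    # Score each utterance by its Mahalanobis distance to the closest class centroid;
    # larger scores indicate likely out-of-domain inputs.
    diffs = embs[:, None, :] - means[None, :, :]              # (n, num_classes, dim)
    dists = np.einsum('ncd,de,nce->nc', diffs, precision, diffs)
    return dists.min(axis=1)

# Hypothetical usage:
# means, precision = fit_mahalanobis(train_embs, train_labels)
# scores = ood_score(test_embs, means, precision)

Thresholding the score (for example, at a quantile of in-domain scores on a held-out split) turns it into an accept/reject decision; AUROC, the metric reported in the abstract, evaluates the score without fixing a threshold.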

Published

2021-05-18

How to Cite

Podolskiy, A., Lipin, D., Bout, A., Artemova, E., & Piontkovskaya, I. (2021). Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13675-13682. https://doi.org/10.1609/aaai.v35i15.17612

Section

AAAI Technical Track on Speech and Natural Language Processing II