Few-shot Learning for Multi-label Intent Detection

Authors

  • Yutai Hou, Harbin Institute of Technology
  • Yongkui Lai, Harbin Institute of Technology
  • Yushan Wu, Harbin Institute of Technology
  • Wanxiang Che, Harbin Institute of Technology
  • Ting Liu, Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i14.17541

Keywords:

Conversational AI/Dialog Systems

Abstract

In this paper, we study few-shot multi-label classification for user intent detection. For multi-label intent detection, state-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels. To determine appropriate thresholds with only a few examples, we first learn universal thresholding experience on data-rich domains, and then adapt the thresholds to specific few-shot domains with a calibration based on nonparametric learning. To better calculate label-instance relevance scores, we introduce label name embeddings as anchor points in the representation space, which refine the representations of different classes so that they are well separated from each other. Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
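The abstract describes two components: label-instance relevance scores anchored by label name embeddings, and a universal threshold that is calibrated to the target few-shot domain. The NumPy sketch below illustrates both ideas; the function names, the mixing weight `alpha`, the blend weight `weight`, and the midpoint-style calibration rule are illustrative assumptions, not the paper's exact formulation or released code.

```python
# Minimal sketch (NumPy only) of the two ideas summarized in the abstract:
# (1) label-instance relevance scores anchored by label name embeddings, and
# (2) a universal threshold learned on data-rich domains, then calibrated
#     on the few-shot support set of the target domain.
# All names and the calibration rule are illustrative assumptions.
import numpy as np


def relevance_scores(instance_emb, support_embs, support_labels, label_name_embs, alpha=0.5):
    """Score every label for one instance.

    Each class representation mixes (a) the mean embedding of the support
    examples carrying that label and (b) the embedding of the label's name,
    which acts as an anchor that keeps classes separated even with few examples.
    """
    num_labels, dim = label_name_embs.shape
    protos = np.zeros_like(label_name_embs)
    for c in range(num_labels):
        mask = support_labels[:, c] == 1
        mean_emb = support_embs[mask].mean(axis=0) if mask.any() else np.zeros(dim)
        protos[c] = alpha * mean_emb + (1 - alpha) * label_name_embs[c]
    # Cosine similarity between the instance and each anchored class representation.
    protos_n = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    inst_n = instance_emb / (np.linalg.norm(instance_emb) + 1e-8)
    return protos_n @ inst_n


def calibrated_threshold(universal_threshold, support_scores, support_labels, weight=0.5):
    """Adapt a threshold learned on data-rich domains to a few-shot domain.

    The domain-specific estimate here is the midpoint between the average
    score of gold labels and of non-gold labels on the support set, blended
    with the universal threshold (a stand-in for nonparametric calibration).
    """
    pos = support_scores[support_labels == 1]
    neg = support_scores[support_labels == 0]
    domain_estimate = 0.5 * (pos.mean() + neg.mean())
    return weight * universal_threshold + (1 - weight) * domain_estimate


def predict_labels(scores, threshold):
    """Select every intent label whose relevance score clears the threshold."""
    return (scores >= threshold).astype(int)
```

Blending the support-set mean with the label name embedding is one simple way anchor points can keep class representations apart when only one or five examples per label are available; likewise, anchoring the threshold to support-set score statistics is one way a threshold learned elsewhere can be adapted without fitting new parameters.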

Published

2021-05-18

How to Cite

Hou, Y., Lai, Y., Wu, Y., Che, W., & Liu, T. (2021). Few-shot Learning for Multi-label Intent Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 13036-13044. https://doi.org/10.1609/aaai.v35i14.17541

Section

AAAI Technical Track on Speech and Natural Language Processing I