Towards Multi-Intent Spoken Language Understanding via Hierarchical Attention and Optimal Transport

Authors

  • Xuxin Cheng, Peking University
  • Zhihong Zhu, Peking University
  • Hongxiang Li, Peking University
  • Yaowei Li, Peking University
  • Xianwei Zhuang, Peking University
  • Yuexian Zou, Peking University

DOI:

https://doi.org/10.1609/aaai.v38i16.29738

Keywords:

NLP: Conversational AI/Dialog Systems

Abstract

Multi-intent spoken language understanding (SLU) handles complicated utterances that express multiple intents and has attracted increasing attention from researchers. Although existing models have achieved promising performance, most of them still suffer from two main problems: (1) each intent has its own specific scope, and semantic information outside that scope can hinder accurate predictions, i.e., the scope barrier; (2) only the guidance from intent to slot is modeled, while the guidance from slot to intent is often neglected, i.e., unidirectional guidance. In this paper, we propose a novel multi-intent SLU framework termed HAOT, which utilizes hierarchical attention to divide the scope of each intent and applies optimal transport to achieve mutual guidance between slot and intent. Experiments demonstrate that our model achieves state-of-the-art performance on two public multi-intent SLU datasets, obtaining a 3.4% improvement in overall accuracy on the MixATIS dataset over the previous best models.
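To make the optimal-transport idea concrete, the sketch below shows a generic entropic optimal transport (Sinkhorn) alignment between intent-side and slot-side token representations, which is one standard way to realize bidirectional slot-intent guidance. This is an illustrative assumption, not the authors' HAOT implementation; the names sinkhorn_plan, intent_feats, and slot_feats are hypothetical.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iters=50):
    """Entropic-regularized optimal transport plan between two
    uniform marginals, computed with Sinkhorn iterations."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # source marginal (intent tokens)
    b = np.full(m, 1.0 / m)          # target marginal (slot tokens)
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)   # transport plan, shape (n, m)

# Toy example with random stand-ins for intent/slot token features.
rng = np.random.default_rng(0)
intent_feats = rng.normal(size=(4, 8))   # 4 intent-side token vectors
slot_feats = rng.normal(size=(6, 8))     # 6 slot-side token vectors

# Cost = 1 - cosine similarity (lower cost = better match).
norm_i = intent_feats / np.linalg.norm(intent_feats, axis=1, keepdims=True)
norm_s = slot_feats / np.linalg.norm(slot_feats, axis=1, keepdims=True)
cost = 1.0 - norm_i @ norm_s.T

plan = sinkhorn_plan(cost)
# Rows of `plan` say how much mass each intent token sends to each slot
# token; columns give the reverse view, so guidance flows both ways.
slot_guided_intent = plan @ slot_feats        # slot -> intent aggregation
intent_guided_slot = plan.T @ intent_feats    # intent -> slot aggregation
print(plan.sum(), slot_guided_intent.shape, intent_guided_slot.shape)
```

The symmetry of the transport plan is what enables mutual guidance: the same coupling matrix, read row-wise or column-wise, lets slot information refine intent representations and vice versa.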

Published

2024-03-24

How to Cite

Cheng, X., Zhu, Z., Li, H., Li, Y., Zhuang, X., & Zou, Y. (2024). Towards Multi-Intent Spoken Language Understanding via Hierarchical Attention and Optimal Transport. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17844-17852. https://doi.org/10.1609/aaai.v38i16.29738

Section

AAAI Technical Track on Natural Language Processing I