Multi-Point Semantic Representation for Intent Classification

Authors

  • Jinghan Zhang, Nankai University
  • Yuxiao Ye, University of Cambridge
  • Yue Zhang, Westlake University
  • Likun Qiu, Alibaba Group
  • Bin Fu, Alibaba Group
  • Yang Li, Alibaba Group
  • Zhenglu Yang, Nankai University
  • Jian Sun, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v34i05.6498

Abstract

Detecting user intents from utterances is a fundamental natural language understanding (NLU) task. To understand the meaning of utterances, some work focuses on fully representing them via semantic parsing, whose annotation is labor-intensive. Other researchers simply view the task as intent classification or frequently asked question (FAQ) retrieval, but they do not leverage the information shared among utterances of different intents. We propose a simple and novel multi-point semantic representation framework with relatively low annotation cost that exploits fine-grained factor information, decomposing queries into four factors: topic, predicate, object/condition, and query type. In addition, we propose a compositional intent bi-attention model under multi-task learning, with three kinds of attention mechanisms among queries, labels, and factors, which jointly combines coarse-grained intent and fine-grained factor information. Extensive experiments show that our framework and model significantly outperform several state-of-the-art approaches, with improvements of 1.35%-2.47% in accuracy.
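The abstract describes decomposing a query into four factors and fusing query, intent-label, and factor information with attention under multi-task learning. Below is a minimal illustrative sketch of that idea in PyTorch; the class names, dimensions, and attention layout are assumptions for illustration only, not the authors' implementation (the paper's model uses three attention mechanisms among queries, labels, and factors, while this sketch shows just two query-side attentions).

```python
# Illustrative sketch only (not the authors' code).
from dataclasses import dataclass
import torch
import torch.nn as nn
import torch.nn.functional as F


@dataclass
class FactorAnnotation:
    """Multi-point (factor-level) annotation of a user query, as described in the abstract."""
    topic: str             # e.g. "refund"
    predicate: str         # e.g. "apply"
    object_condition: str  # e.g. "order not yet shipped"
    query_type: str        # e.g. "how-to"


class BiAttentionIntentClassifier(nn.Module):
    """Toy fusion of query, intent-label, and factor embeddings via attention."""

    def __init__(self, vocab_size, num_intents, num_factors, dim=128):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.label_emb = nn.Embedding(num_intents, dim)   # coarse-grained intent labels
        self.factor_emb = nn.Embedding(num_factors, dim)  # fine-grained factor values
        self.intent_head = nn.Linear(2 * dim, num_intents)
        self.factor_head = nn.Linear(2 * dim, num_factors)  # auxiliary multi-task head

    def attend(self, query, keys):
        # query: (B, T, D), keys: (K, D) -> attended context: (B, T, D)
        scores = torch.einsum("btd,kd->btk", query, keys)
        weights = F.softmax(scores, dim=-1)
        return torch.einsum("btk,kd->btd", weights, keys)

    def forward(self, token_ids):
        h = self.token_emb(token_ids)                         # (B, T, D)
        label_ctx = self.attend(h, self.label_emb.weight)     # query-label attention
        factor_ctx = self.attend(h, self.factor_emb.weight)   # query-factor attention
        pooled = torch.cat([label_ctx.mean(1), factor_ctx.mean(1)], dim=-1)
        return self.intent_head(pooled), self.factor_head(pooled)


# Multi-task objective: intent classification plus an auxiliary factor-prediction loss,
# echoing the joint use of coarse-grained intents and fine-grained factors in the abstract.
model = BiAttentionIntentClassifier(vocab_size=5000, num_intents=30, num_factors=200)
tokens = torch.randint(0, 5000, (4, 12))
intent_logits, factor_logits = model(tokens)
loss = F.cross_entropy(intent_logits, torch.randint(0, 30, (4,))) + \
       F.cross_entropy(factor_logits, torch.randint(0, 200, (4,)))
```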

Published

2020-04-03

How to Cite

Zhang, J., Ye, Y., Zhang, Y., Qiu, L., Fu, B., Li, Y., Yang, Z., & Sun, J. (2020). Multi-Point Semantic Representation for Intent Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9531-9538. https://doi.org/10.1609/aaai.v34i05.6498

Section

AAAI Technical Track: Natural Language Processing