Graph LSTM with Context-Gated Mechanism for Spoken Language Understanding


  • Linhao Zhang Peking University
  • Dehong Ma Peking University
  • Xiaodong Zhang Peking University
  • Xiaohui Yan Huawei Technologies
  • Houfeng Wang Peking University



Much research in recent years has focused on spoken language understanding (SLU), which usually involves two tasks: intent detection and slot filling. Since Yao et al. (2013), almost all SLU systems have been RNN-based, and these have been shown to suffer various limitations due to their sequential nature. In this paper, we propose to tackle this task with Graph LSTM, which first converts text into a graph and then utilizes a message passing mechanism to learn node representations. Not only does the Graph LSTM address the limitations of sequential models, but it also helps exploit the semantic correlation between slots and intents. We further propose a context-gated mechanism to make better use of context information for slot filling. Our extensive evaluation shows that the proposed model outperforms state-of-the-art results by a large margin.
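The abstract's pipeline (convert text to a graph, propagate messages between nodes, then gate node states with context before slot prediction) can be illustrated with a minimal NumPy sketch. This is not the paper's model: the chain-graph connectivity, mean-pooled context vector, gate formulation, and all weights and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and weights (not from the paper).
d = 8                     # hidden size
T = 5                     # number of tokens / graph nodes
W_in  = rng.normal(scale=0.1, size=(d, d))      # input transform
W_msg = rng.normal(scale=0.1, size=(d, d))      # neighbor-message transform
W_g   = rng.normal(scale=0.1, size=(d, 2 * d))  # context-gate weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 1: convert text into a graph. Here: a simple chain over adjacent
# tokens; the paper may use richer connectivity.
adj = np.zeros((T, T))
for i in range(T - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
deg = adj.sum(axis=1, keepdims=True)

x = rng.normal(size=(T, d))   # token embeddings (random stand-ins)
h = np.zeros((T, d))          # node states

# Step 2: message passing -- each node repeatedly aggregates its
# neighbors' states and updates its own representation.
for _ in range(3):            # a few propagation steps
    msg = (adj @ h) / np.maximum(deg, 1.0)      # mean of neighbor states
    h = np.tanh(x @ W_in.T + msg @ W_msg.T)     # node update

# Step 3: context gate -- mix each node state with a sentence-level
# context vector (mean-pooled here) before slot prediction.
c = h.mean(axis=0)                              # context vector
g = sigmoid(np.concatenate([h, np.tile(c, (T, 1))], axis=1) @ W_g.T)
slot_features = g * h + (1.0 - g) * c           # gated slot representation

print(slot_features.shape)  # (5, 8)
```

After propagation, `slot_features` would feed a per-token slot classifier, while a pooled node summary could feed an intent classifier, letting the two tasks share the graph representation.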




How to Cite

Zhang, L., Ma, D., Zhang, X., Yan, X., & Wang, H. (2020). Graph LSTM with Context-Gated Mechanism for Spoken Language Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9539-9546.



AAAI Technical Track: Natural Language Processing