Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling

Authors

  • Jixuan Wang, University of Toronto; Vector Institute
  • Kai Wei, Amazon
  • Martin Radfar, Amazon
  • Weiwei Zhang, Amazon
  • Clement Chung, Amazon

DOI:

https://doi.org/10.1609/aaai.v35i16.17642

Keywords:

Conversational AI/Dialog Systems

Abstract

We propose a novel Transformer encoder-based architecture with syntactic knowledge encoded for intent detection and slot filling. Specifically, we encode syntactic knowledge into the Transformer encoder by jointly training it to predict syntactic parse ancestors and the part-of-speech tag of each token via multi-task learning. Our model is based on self-attention and feed-forward layers and does not require external syntactic information to be available at inference time. Experiments show that on two benchmark datasets, our models with only two Transformer encoder layers achieve state-of-the-art results. Compared to the previously best-performing model without pre-training, our models achieve absolute F1-score and accuracy improvements of 1.59% and 0.85% for slot filling and intent detection on the SNIPS dataset, respectively. Our models also achieve absolute F1-score and accuracy improvements of 0.1% and 0.34% for slot filling and intent detection on the ATIS dataset, respectively, over the previously best-performing model. Furthermore, visualization of the self-attention weights illustrates the benefit of incorporating syntactic information during training.
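The abstract describes a multi-task arrangement: a compact Transformer encoder feeds an intent-classification head and a slot-tagging head, plus two auxiliary heads (part-of-speech and syntactic parse ancestor prediction) that inject syntactic knowledge during training and are not needed at inference. The sketch below illustrates that arrangement in PyTorch; the layer sizes, label counts, loss weights, and the particular form of the ancestor head are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the multi-task setup described in the abstract.
# Vocabulary sizes, label counts, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn


class SyntaxAwareEncoder(nn.Module):
    def __init__(self, vocab_size=10000, d_model=128, n_heads=4,
                 n_layers=2, n_intents=7, n_slots=72, n_pos=45):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=512,
                                           batch_first=True)
        # Only two self-attention/feed-forward layers, as in the paper.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Primary task heads.
        self.intent_head = nn.Linear(d_model, n_intents)   # sentence-level
        self.slot_head = nn.Linear(d_model, n_slots)       # token-level
        # Auxiliary syntactic heads, used only during training.
        self.pos_head = nn.Linear(d_model, n_pos)           # part-of-speech tags
        self.ancestor_head = nn.Linear(d_model, d_model)    # scores candidate ancestor tokens

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))             # (B, T, d_model)
        intent_logits = self.intent_head(h[:, 0])           # first token as sentence representation
        slot_logits = self.slot_head(h)                      # (B, T, n_slots)
        pos_logits = self.pos_head(h)                        # (B, T, n_pos)
        # Ancestor prediction sketched as token-to-token scores over the sentence.
        ancestor_scores = self.ancestor_head(h) @ h.transpose(1, 2)  # (B, T, T)
        return intent_logits, slot_logits, pos_logits, ancestor_scores


def multitask_loss(outputs, targets, aux_weight=0.1):
    """Weighted sum of primary and auxiliary losses (weighting is an assumption)."""
    intent_logits, slot_logits, pos_logits, ancestor_scores = outputs
    intent_y, slot_y, pos_y, ancestor_y = targets
    ce = nn.CrossEntropyLoss()
    loss = ce(intent_logits, intent_y)
    loss = loss + ce(slot_logits.flatten(0, 1), slot_y.flatten())
    loss = loss + aux_weight * ce(pos_logits.flatten(0, 1), pos_y.flatten())
    loss = loss + aux_weight * ce(ancestor_scores.flatten(0, 1), ancestor_y.flatten())
    return loss
```

At inference time only the intent and slot logits are consumed, consistent with the abstract's point that no external syntactic information is required once training is complete.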

Published

2021-05-18

How to Cite

Wang, J., Wei, K., Radfar, M., Zhang, W., & Chung, C. (2021). Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 13943-13951. https://doi.org/10.1609/aaai.v35i16.17642

Section

AAAI Technical Track on Speech and Natural Language Processing III