Not All Attention Is Needed: Gated Attention Network for Sequence Data

Authors

  • Lanqing Xue, HKUST
  • Xiaopeng Li, Amazon
  • Nevin L. Zhang, HKUST

DOI:

https://doi.org/10.1609/aaai.v34i04.6129

Abstract

Although deep neural networks generally have fixed network structures, the concept of dynamic mechanisms has drawn increasing attention in recent years. Attention mechanisms compute input-dependent dynamic attention weights for aggregating a sequence of hidden states. Dynamic network configuration in convolutional neural networks (CNNs) selectively activates only part of the network at a time for different inputs. In this paper, we combine the two dynamic mechanisms for text classification tasks. Traditional attention mechanisms attend to the whole sequence of hidden states for an input sentence, yet in most cases not all attention is needed, especially for long sequences. We propose a novel method called Gated Attention Network (GA-Net) that dynamically selects a subset of elements to attend to using an auxiliary network, and computes attention weights to aggregate the selected elements. This avoids a significant amount of unnecessary computation on unattended elements and allows the model to focus on the important parts of the sequence. Experiments on various datasets show that the proposed method outperforms all baseline models with global or local attention, while requiring less computation and offering better interpretability. The idea is also promising to extend to more complex attention-based models, such as transformers and sequence-to-sequence models.
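To make the mechanism described in the abstract concrete, the sketch below shows one way an auxiliary gating network could select a subset of positions before attention is computed over them. This is only an illustrative sketch, not the authors' implementation: the module and layer names (`GatedAttention`, `gate_net`, `score_net`) are hypothetical, and the straight-through Gumbel-softmax used for the binary gates is a common relaxation assumed here for differentiability rather than a detail taken from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttention(nn.Module):
    """Sketch of gated attention: an auxiliary network emits a binary keep/drop
    gate for each position, and attention weights are computed only over the
    kept positions before aggregating the hidden states."""

    def __init__(self, hidden_dim, aux_dim=32):
        super().__init__()
        # Auxiliary gating network: two logits (drop, keep) per position.
        self.gate_net = nn.Sequential(
            nn.Linear(hidden_dim, aux_dim), nn.Tanh(), nn.Linear(aux_dim, 2)
        )
        # Attention scorer for the backbone aggregation.
        self.score_net = nn.Linear(hidden_dim, 1)

    def forward(self, h, tau=1.0):
        # h: (batch, seq_len, hidden_dim) hidden states from an encoder (e.g. a BiLSTM).
        gate_logits = self.gate_net(h)                                     # (batch, seq_len, 2)
        # Hard binary gates via a straight-through Gumbel-softmax estimator,
        # keeping the discrete selection differentiable during training.
        gates = F.gumbel_softmax(gate_logits, tau=tau, hard=True)[..., 1]  # (batch, seq_len)

        scores = self.score_net(h).squeeze(-1)                             # (batch, seq_len)
        # Mask out unselected positions; a large negative value keeps the
        # softmax finite even if every gate happens to close.
        scores = scores.masked_fill(gates == 0, -1e9)
        attn = torch.softmax(scores, dim=-1)                               # (batch, seq_len)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)               # (batch, hidden_dim)
        return context, attn, gates
```

In this toy version, the returned `context` vector would feed a classifier head, and the `gates` expose which positions the model chose to attend to, which is the source of the interpretability and compute savings the abstract refers to.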

Published

2020-04-03

How to Cite

Xue, L., Li, X., & Zhang, N. L. (2020). Not All Attention Is Needed: Gated Attention Network for Sequence Data. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6550-6557. https://doi.org/10.1609/aaai.v34i04.6129

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning