Self-Attention Attribution: Interpreting Information Interactions Inside Transformer

Authors

  • Yaru Hao, Beihang University; Microsoft Research
  • Li Dong, Microsoft Research
  • Furu Wei, Microsoft Research
  • Ke Xu, Beihang University

DOI:

https://doi.org/10.1609/aaai.v35i14.17533

Keywords:

Interpretability & Analysis of NLP Models

Abstract

The great success of Transformer-based models benefits from the powerful multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input. Prior work strives to attribute model decisions to individual input features with different saliency measures, but these methods fail to explain how the input features interact with each other to reach predictions. In this paper, we propose a self-attention attribution method to interpret the information interactions inside Transformer. We take BERT as an example to conduct extensive studies. First, we apply self-attention attribution to identify the important attention heads, while the others can be pruned with marginal performance degradation. Furthermore, we extract the most salient dependencies in each layer to construct an attribution tree, which reveals the hierarchical interactions inside Transformer. Finally, we show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
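
As a rough illustration of the kind of attention-score attribution the abstract describes, the sketch below applies an integrated-gradients-style attribution to the attention matrix of a toy single-layer self-attention model in PyTorch. This is not the authors' released implementation: the toy model, tensor shapes, class logit used as the attributed function, and all function names are assumptions made for illustration only.

```python
# Minimal sketch (assumed, illustrative) of attention-score attribution:
# Attr(A) ≈ A * (1/m) * sum_{k=1..m} dF(k/m * A)/dA, where F is a model
# output viewed as a function of the attention scores A.
import torch

torch.manual_seed(0)

seq_len, d_model, n_heads = 6, 16, 4
d_head = d_model // n_heads

# Toy frozen single-layer self-attention "model" with a linear classifier
# over mean-pooled outputs; a real study would use a pretrained BERT.
Wq = torch.randn(d_model, d_model)
Wk = torch.randn(d_model, d_model)
Wv = torch.randn(d_model, d_model)
Wc = torch.randn(d_model, 2)          # binary classification head
x = torch.randn(seq_len, d_model)     # token representations for one example

def attention_scores(x):
    """Softmax attention scores A with shape (n_heads, seq_len, seq_len)."""
    q = (x @ Wq).view(seq_len, n_heads, d_head).transpose(0, 1)
    k = (x @ Wk).view(seq_len, n_heads, d_head).transpose(0, 1)
    return torch.softmax(q @ k.transpose(-2, -1) / d_head ** 0.5, dim=-1)

def logit_from_attention(A, x, target=0):
    """Target-class logit computed as a function of the attention scores A."""
    v = (x @ Wv).view(seq_len, n_heads, d_head).transpose(0, 1)
    out = (A @ v).transpose(0, 1).reshape(seq_len, d_model)
    return (out.mean(dim=0) @ Wc)[target]

def attention_attribution(x, target=0, steps=20):
    """Riemann-sum approximation of A * integral_0^1 dF(alpha*A)/dA dalpha."""
    A = attention_scores(x).detach()
    total_grad = torch.zeros_like(A)
    for k in range(1, steps + 1):
        scaled = (k / steps * A).requires_grad_(True)
        logit_from_attention(scaled, x, target).backward()
        total_grad += scaled.grad
    return A * total_grad / steps

attr = attention_attribution(x)
# Per-head importance: the largest attribution magnitude inside each head,
# a rough proxy for which heads to keep and which could be pruned.
print(attr.abs().amax(dim=(1, 2)))
```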

Published

2021-05-18

How to Cite

Hao, Y., Dong, L., Wei, F., & Xu, K. (2021). Self-Attention Attribution: Interpreting Information Interactions Inside Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12963-12971. https://doi.org/10.1609/aaai.v35i14.17533

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I