The Heads Hypothesis: A Unifying Statistical Approach Towards Understanding Multi-Headed Attention in BERT

Authors

  • Madhura Pande, Department of Computer Science and Engineering, IIT Madras, India; Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India
  • Aakriti Budhraja, Department of Computer Science and Engineering, IIT Madras, India; Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India
  • Preksha Nema, Department of Computer Science and Engineering, IIT Madras, India; Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India
  • Pratyush Kumar, Department of Computer Science and Engineering, IIT Madras, India; Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India
  • Mitesh M. Khapra, Department of Computer Science and Engineering, IIT Madras, India; Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India

DOI:

https://doi.org/10.1609/aaai.v35i15.17605

Keywords:

Interpretability & Analysis of NLP Models

Abstract

Multi-headed attention is a mainstay in transformer-based models. Different methods have been proposed to classify the role of each attention head based on the relations between tokens that have high pair-wise attention. These roles include syntactic (tokens with some syntactic relation), local (nearby tokens), block (tokens in the same sentence), and delimiter (the special [CLS] and [SEP] tokens). There are two main challenges with existing methods for classification: (a) there are no standard scores across studies or across functional roles, and (b) these scores are often average quantities measured across sentences without capturing statistical significance. In this work, we formalize a simple yet effective score that generalizes to all the roles of attention heads and employ hypothesis testing on this score for robust inference. This provides us with the right lens to systematically analyze attention heads and confidently comment on many commonly posed questions about analyzing the BERT model. In particular, we comment on the co-location of multiple functional roles in the same attention head, the distribution of attention heads across layers, and the effect of fine-tuning for specific NLP tasks on these functional roles. The code is publicly available at https://github.com/iitmnlp/heads-hypothesis
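To make the idea of "a score plus a hypothesis test" concrete, below is a minimal sketch in Python. It assumes the score for a head is the fraction of its attention mass falling on token pairs that satisfy a candidate relation (syntactic, local, block, or delimiter), and that a role is assigned only if per-sentence scores significantly exceed those of a null model (e.g. uniform or shuffled attention). The function names, the paired one-sided t-test, and the alpha threshold are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

import numpy as np
from scipy import stats

def head_score(attention, relation_mask):
    """Fraction of a head's attention mass on token pairs satisfying a relation.

    attention:     (seq_len, seq_len) attention weights for one head, rows sum to 1
    relation_mask: (seq_len, seq_len) boolean mask, True where the (query, key)
                   pair satisfies the candidate functional relation
    """
    return float((attention * relation_mask).sum() / attention.sum())

def head_has_role(scores, null_scores, alpha=0.01):
    """Assign the role only if per-sentence scores significantly exceed the null.

    scores:      per-sentence scores for the real head
    null_scores: per-sentence scores under a null model (e.g. uniform attention)
    """
    # One-sided paired test: reject the null only when the head scores higher.
    _, p_value = stats.ttest_rel(scores, null_scores, alternative="greater")
    return p_value < alpha

# Toy usage: a "local" head that always attends to the previous token.
seq_len = 8
attn = np.eye(seq_len, k=-1)   # each token attends to its predecessor
attn[0, 0] = 1.0               # first token attends to itself
local_mask = np.abs(np.subtract.outer(np.arange(seq_len), np.arange(seq_len))) <= 1
print(head_score(attn, local_mask))  # close to 1.0 for this local head

Gating the role assignment on the test, rather than on the averaged score alone, is what makes the classification robust to sentence-to-sentence noise, which is the statistical-significance gap the abstract points out in prior scoring schemes.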

Published

2021-05-18

How to Cite

Pande, M., Budhraja, A., Nema, P., Kumar, P., & Khapra, M. M. (2021). The Heads Hypothesis: A Unifying Statistical Approach Towards Understanding Multi-Headed Attention in BERT. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13613-13621. https://doi.org/10.1609/aaai.v35i15.17605

Section

AAAI Technical Track on Speech and Natural Language Processing II