Block-Skim: Efficient Question Answering for Transformer

Authors

  • Yue Guan, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute
  • Zhengyi Li, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute
  • Zhouhan Lin, Shanghai Jiao Tong University
  • Yuhao Zhu, University of Rochester
  • Jingwen Leng, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute
  • Minyi Guo, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute

DOI:

https://doi.org/10.1609/aaai.v36i10.21316

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Transformer models have achieved promising results on natural language processing (NLP) tasks including extractive question answering (QA). Common Transformer encoders used in NLP tasks process the hidden states of all input tokens in the context paragraph throughout all layers. However, unlike other tasks such as sequence classification, answering the raised question does not require all of the tokens in the context paragraph. Following this motivation, we propose Block-Skim, which learns to skim unnecessary context in higher hidden layers to improve and accelerate the Transformer. The key idea of Block-Skim is to identify the context blocks that must be further processed and those that can be safely discarded early on during inference. Critically, we find that such information can be sufficiently derived from the self-attention weights inside the Transformer model. We further prune the hidden states corresponding to the unnecessary positions early in lower layers, achieving a significant inference-time speedup. To our surprise, we observe that models pruned in this way outperform their full-size counterparts. Block-Skim improves QA models' accuracy on different datasets and achieves a 3× speedup on the BERT-base model.
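The abstract does not spell out the pruning mechanics, so the following is only a minimal sketch of how block-level skimming driven by self-attention weights could look. The function `skim_blocks`, the block size, and the threshold rule are illustrative assumptions for this sketch; the paper itself learns the skim decision rather than thresholding a heuristic score.

```python
import torch

def skim_blocks(hidden_states, attention_probs, block_size=32, threshold=1.0):
    """Drop context blocks that attract little attention (illustrative sketch).

    hidden_states:   (1, seq_len, hidden)          token representations (batch of 1 for clarity)
    attention_probs: (1, heads, seq_len, seq_len)  softmaxed self-attention weights
    """
    _, seq_len, hidden = hidden_states.shape
    num_blocks = seq_len // block_size
    usable = num_blocks * block_size

    # Attention mass each token receives, averaged over heads and query positions.
    received = attention_probs.mean(dim=1).mean(dim=1)[:, :usable]       # (1, usable)

    # Sum the mass inside each block; blocks that attract little attention are
    # treated as irrelevant to answering the question.
    block_scores = received.view(1, num_blocks, block_size).sum(dim=-1)  # (1, num_blocks)

    # Keep blocks whose mass exceeds `threshold` times the average block mass.
    keep = block_scores > threshold * block_scores.mean(dim=-1, keepdim=True)

    # Expand the block decision to token granularity and drop pruned tokens,
    # so higher layers only process the surviving positions.
    token_mask = keep.repeat_interleave(block_size, dim=-1)              # (1, usable)
    pruned = hidden_states[:, :usable][token_mask].unsqueeze(0)          # (1, kept, hidden)
    return pruned, keep

# Example with random tensors: 512 tokens, 12 heads, 768-dim hidden states.
h = torch.randn(1, 512, 768)
a = torch.softmax(torch.randn(1, 12, 512, 512), dim=-1)
pruned, keep = skim_blocks(h, a)
```

In this sketch the pruning happens once at a single layer; applying such a decision at lower layers, as the abstract describes, is what shortens the sequence that all subsequent layers must process and hence yields the inference-time speedup.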

Published

2022-06-28

How to Cite

Guan, Y., Li, Z., Lin, Z., Zhu, Y., Leng, J., & Guo, M. (2022). Block-Skim: Efficient Question Answering for Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10710-10719. https://doi.org/10.1609/aaai.v36i10.21316

Section

AAAI Technical Track on Speech and Natural Language Processing