Faster Depth-Adaptive Transformers

Authors

  • Yijin Liu, Beijing Jiaotong University
  • Fandong Meng, Pattern Recognition Center, WeChat AI, Tencent Inc.
  • Jie Zhou, Pattern Recognition Center, WeChat AI, Tencent Inc.
  • Yufeng Chen, Beijing Jiaotong University
  • Jinan Xu, Beijing Jiaotong University

DOI:

https://doi.org/10.1609/aaai.v35i15.17584

Keywords:

Text Classification & Sentiment Analysis

Abstract

Depth-adaptive neural networks can dynamically adjust their depth according to the hardness of input words, and thus improve efficiency. The main challenge is how to measure such hardness and decide the required depth (i.e., the number of layers) to execute. Previous works generally build a halting unit to decide whether the computation should continue or stop at each layer. As there is no specific supervision of depth selection, the halting unit may be under-optimized and inaccurate, which results in suboptimal and unstable performance when modeling sentences. In this paper, we get rid of the halting unit and estimate the required depths in advance, which yields a faster depth-adaptive model. Specifically, two approaches are proposed to explicitly measure the hardness of input words and estimate the corresponding adaptive depths, namely 1) mutual information (MI) based estimation and 2) reconstruction loss based estimation. We conduct experiments on the text classification task with 24 datasets of various sizes and domains. Results confirm that our approaches can speed up the vanilla Transformer (up to 7x) while preserving high accuracy. Moreover, efficiency and robustness are significantly improved when compared with other depth-adaptive approaches.
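To make the idea of pre-estimated, per-token depths concrete, below is a minimal sketch (not the authors' released code) of how a Transformer encoder could route tokens once each word's depth has been fixed in advance from some hardness score, e.g. an MI- or reconstruction-loss-based estimate. The `hardness_to_depth` mapping, the module names, and the simple ceiling-based bucketing are illustrative assumptions; the sketch only shows the per-token depth routing, not the actual computation savings reported in the paper.

```python
# Illustrative sketch, assuming per-token hardness scores in [0, 1] are available
# from some upstream estimator (e.g. mutual information or reconstruction loss).
import torch
import torch.nn as nn

def hardness_to_depth(hardness, max_depth):
    """Map hardness scores in [0, 1] to integer depths in [1, max_depth]."""
    return torch.clamp((hardness * max_depth).ceil().long(), min=1, max=max_depth)

class DepthAdaptiveEncoder(nn.Module):
    def __init__(self, d_model=64, nhead=4, max_depth=6):
        super().__init__()
        self.max_depth = max_depth
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(max_depth)]
        )

    def forward(self, x, depths):
        # x: (batch, seq, d_model); depths: (batch, seq) integer depth per token.
        for i, layer in enumerate(self.layers):
            updated = layer(x)
            # Tokens whose assigned depth is exhausted keep their previous state.
            active = (depths > i).unsqueeze(-1)
            x = torch.where(active, updated, x)
        return x

# Usage: hardness would come from the MI- or reconstruction-based estimator;
# random scores are used here as placeholders.
enc = DepthAdaptiveEncoder()
tokens = torch.randn(2, 10, 64)
hardness = torch.rand(2, 10)
depths = hardness_to_depth(hardness, enc.max_depth)
out = enc(tokens, depths)
```

Because the depths are known before the forward pass begins, no halting unit is evaluated at each layer; a real implementation would additionally skip the computation for exhausted tokens rather than overwrite it, which is where the reported speedup comes from.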

Published

2021-05-18

How to Cite

Liu, Y., Meng, F., Zhou, J., Chen, Y., & Xu, J. (2021). Faster Depth-Adaptive Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13424-13432. https://doi.org/10.1609/aaai.v35i15.17584

Issue

Vol. 35 No. 15 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing II