On Scalar Embedding of Relative Positions in Attention Models

Authors

  • Junshuang Wu, Beihang University
  • Richong Zhang, Beihang University
  • Yongyi Mao, University of Ottawa
  • Junfan Chen, Beihang University

DOI:

https://doi.org/10.1609/aaai.v35i16.17654

Keywords:

Applications

Abstract

Attention with positional encoding has been demonstrated as a powerful component in modern neural network models, such as transformers. However, why positional encoding works well in attention models remains largely unanswered. In this paper, we study the scalar relative positional encoding (SRPE) proposed in the T5 transformer. Such an encoding method has two features. First, it uses a scalar to embed relative positions. Second, the relative positions are bucketized using a fixed heuristic algorithm, and positions in the same bucket share the same embedding. In this work, we show that SRPE in attention has an elegant probabilistic interpretation. More specifically, the positional encoding serves to produce a prior distribution for the attended positions. The resulting attentive distribution can be viewed as a posterior distribution of the attended position given the observed input sequence. Furthermore, we propose a new SRPE (AT5) that adopts a learnable bucketization protocol and automatically adapts to the dependency range specific to the learning task. Empirical studies show that AT5 achieves superior performance to T5's SRPE.
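To make the mechanism concrete, the sketch below (not taken from the paper) shows how a T5-style scalar relative positional encoding can be computed and added to attention logits: signed query-key offsets are bucketized with the commonly used log-spaced heuristic, one learned scalar per bucket is added to the scaled dot-product logits, and the softmax of the biased logits gives the attentive distribution that the paper interprets as a posterior over attended positions. The function names, the num_buckets=32 and max_distance=128 defaults, and the toy tensors are illustrative assumptions, not the authors' code.

```python
import numpy as np

def relative_position_bucket(rel_pos, num_buckets=32, max_distance=128):
    """Map signed relative positions to bucket ids (T5-style bidirectional heuristic).

    Nearby offsets get exact buckets; larger offsets share log-spaced buckets.
    """
    half = num_buckets // 2
    bucket = (rel_pos > 0).astype(np.int64) * half            # sign uses half the buckets
    rel = np.abs(rel_pos)
    max_exact = half // 2
    is_small = rel < max_exact                                 # one bucket per small offset
    large = max_exact + (
        np.log(np.maximum(rel, 1) / max_exact)
        / np.log(max_distance / max_exact)
        * (half - max_exact)
    ).astype(np.int64)
    large = np.minimum(large, half - 1)                        # clip distant offsets
    return bucket + np.where(is_small, rel, large)

def srpe_attention(q, k, bias_table, num_buckets=32):
    """Scaled dot-product attention with a scalar relative-position bias per (i, j)."""
    n, d = q.shape
    logits = q @ k.T / np.sqrt(d)
    pos = np.arange(n)
    buckets = relative_position_bucket(pos[None, :] - pos[:, None], num_buckets)
    logits = logits + bias_table[buckets]                      # scalar bias acts as a log-prior
    attn = np.exp(logits - logits.max(-1, keepdims=True))
    return attn / attn.sum(-1, keepdims=True)                  # posterior-like attentive distribution

# Toy usage: 6 tokens, 8-dim head, one learnable scalar per bucket.
rng = np.random.default_rng(0)
q, k = rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
bias_table = rng.normal(size=32)
print(srpe_attention(q, k, bias_table).shape)  # (6, 6)
```

In this reading, bias_table plays the role of the (log of the) prior over attended positions, and the paper's AT5 variant would replace the fixed bucketing heuristic above with a learnable protocol; the sketch keeps the fixed heuristic for simplicity.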

Published

2021-05-18

How to Cite

Wu, J., Zhang, R., Mao, Y., & Chen, J. (2021). On Scalar Embedding of Relative Positions in Attention Models. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14050-14057. https://doi.org/10.1609/aaai.v35i16.17654

Section

AAAI Technical Track on Speech and Natural Language Processing III