TY - JOUR
AU - Wu, Junshuang
AU - Zhang, Richong
AU - Mao, Yongyi
AU - Chen, Junfan
PY - 2021/05/18
Y2 - 2024/03/28
TI - On Scalar Embedding of Relative Positions in Attention Models
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 16
SE - AAAI Technical Track on Speech and Natural Language Processing III
DO - 10.1609/aaai.v35i16.17654
UR - https://ojs.aaai.org/index.php/AAAI/article/view/17654
SP - 14050-14057
AB - Attention with positional encoding has been demonstrated as a powerful component in modern neural network models, such as transformers. However, why positional encoding works well in attention models remains largely unanswered. In this paper, we study the scalar relative positional encoding (SRPE) proposed in the T5 transformer. Such an encoding method has two features. First, it uses a scalar to embed relative positions. Second, the relative positions are bucketized using a fixed heuristic algorithm, and positions in the same bucket share the same embedding. In this work, we show that SRPE in attention has an elegant probabilistic interpretation. More specifically, the positional encoding serves to produce a prior distribution for the attended positions. The resulting attentive distribution can be viewed as a posterior distribution of the attended position given the observed input sequence. Furthermore, we propose a new SRPE (AT5) that adopts a learnable bucketization protocol and automatically adapts to the dependency range specific to the learning task. Empirical studies show that AT5 achieves superior performance to T5's SRPE.
ER -
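
The sketch below, appended after the record, illustrates the general idea described in the abstract: each relative position is mapped to a bucket by a fixed heuristic, every bucket owns a single learnable scalar, and that scalar is added to the attention logits before the softmax, where it behaves like a log-prior over attended positions. The bucketization heuristic, function names, and dimensions here are illustrative assumptions in the spirit of the abstract, not the exact algorithm from the paper or the T5 codebase.

```python
import numpy as np

def bucketize_relative_position(rel_pos, num_buckets=8, max_distance=32):
    """Map a signed relative position to a bucket id.

    Illustrative heuristic (assumption): nearby offsets get their own
    buckets, distant offsets share logarithmically coarser buckets --
    in the spirit of, but not identical to, T5's fixed bucketization.
    """
    sign_offset = num_buckets // 2 if rel_pos > 0 else 0  # separate buckets for left/right context
    half = num_buckets // 2
    exact = half // 2
    d = abs(rel_pos)
    if d < exact:
        bucket = d                                    # small distances: one bucket each
    else:
        # larger distances: log-spaced buckets, capped at the last bucket
        log_ratio = np.log(d / exact) / np.log(max_distance / exact)
        bucket = exact + int(log_ratio * (half - exact - 1)) + 1
        bucket = min(bucket, half - 1)
    return sign_offset + bucket

def attention_with_srpe(Q, K, V, bias_per_bucket):
    """Single-head attention with a scalar relative-position bias.

    bias_per_bucket: 1-D array with one (learnable) scalar per bucket.
    Adding it to the logits multiplies the softmax by a prior over
    attended positions, which is the probabilistic reading discussed
    in the abstract.
    """
    n, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)                     # content-based scores
    for i in range(n):
        for j in range(n):
            b = bucketize_relative_position(j - i)
            logits[i, j] += bias_per_bucket[b]        # shared scalar per bucket
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax -> distribution over positions
    return probs @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 6, 4
    Q, K, V = rng.normal(size=(3, n, d))              # toy queries, keys, values
    bias = rng.normal(size=8)                         # one scalar per bucket (8 buckets assumed)
    print(attention_with_srpe(Q, K, V, bias).shape)   # (6, 4)
```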