The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers (Student Abstract)

Authors

  • Rickard Brännvall, Computer Science Department, RISE Research Institutes of Sweden; Machine Learning Group, Luleå University of Technology, Sweden

DOI:

https://doi.org/10.1609/aaai.v38i21.30422

Keywords:

AI Architectures, Computational Sustainability, Knowledge Representation, Machine Learning

Abstract

To enhance the computational efficiency of quantized Transformers, we replace the dot-product and Softmax-based attention with an alternative mechanism involving only addition and ReLU activation. This sidesteps the expansion to double precision often required by matrix multiplication and avoids costly Softmax evaluations, while maintaining much of the core functionality of conventional dot-product attention. It can enable more efficient execution and support larger quantized Transformer models on resource-constrained hardware or alternative arithmetic systems such as homomorphic encryption. Training experiments on four common benchmark tasks show test set prediction scores comparable to those of conventional Transformers with dot-product attention. Our scaling experiments also suggest significant computational savings, both in plaintext and under encryption. In particular, we believe that the ReLU and addition-based attention mechanism introduced in this paper may enable privacy-preserving AI applications operating under homomorphic encryption by avoiding the costly multiplication of encrypted variables.
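To make the idea concrete, the sketch below shows one way an addition- and ReLU-based attention head could look in PyTorch. It is only an illustration consistent with the abstract's description, not the paper's exact formulation: the pairwise L1 "inhibition" score, the scaling factor gamma, and the function name inhibitor_attention are assumptions introduced here for clarity. Scores are built from subtractions and absolute values rather than dot products, and values are gated by subtracting the score and clipping with ReLU rather than weighted by a Softmax.

    # Minimal sketch of an addition/ReLU-based ("inhibitor"-style) attention head.
    # Illustrative only: the L1 score and gamma scaling are assumptions, not
    # necessarily the exact mechanism used in the paper.
    import torch
    import torch.nn.functional as F

    def inhibitor_attention(q, k, v, gamma=1.0):
        """q, k, v: tensors of shape (batch, seq_len, d_model)."""
        # Pairwise L1 distance between queries and keys -> (batch, seq_q, seq_k).
        # Uses only subtraction, absolute value, and summation (no dot products).
        z = gamma * (q.unsqueeze(2) - k.unsqueeze(1)).abs().sum(dim=-1)
        # "Inhibit" each value by its score, clip at zero with ReLU, and aggregate
        # over the key dimension (no Softmax evaluation).
        h = F.relu(v.unsqueeze(1) - z.unsqueeze(-1)).sum(dim=2)
        return h

    if __name__ == "__main__":
        q = torch.randn(2, 5, 16)
        k = torch.randn(2, 5, 16)
        v = torch.randn(2, 5, 16)
        print(inhibitor_attention(q, k, v).shape)  # torch.Size([2, 5, 16])

Because every operation here is an addition, subtraction, comparison, or ReLU, such a head avoids both the wide accumulators needed for integer matrix multiplication and the exponentials of Softmax, which is what makes it attractive for quantized or homomorphically encrypted execution.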

Published

2024-03-24

How to Cite

Brännvall, R. (2024). The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23445-23446. https://doi.org/10.1609/aaai.v38i21.30422