The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers (Student Abstract)


  • Rickard Brännvall, Computer Science Department, RISE Research Institutes of Sweden; Machine Learning Group, Luleå University of Technology, Sweden



Keywords: AI Architectures, Computational Sustainability, Knowledge Representation, Machine Learning


To enhance the computational efficiency of quantized Transformers, we replace the dot-product and Softmax-based attention with an alternative mechanism involving only addition and ReLU activation. This sidesteps the expansion to double precision often required by matrix multiplication and avoids costly Softmax evaluations, while maintaining much of the core functionality of conventional dot-product attention. It can enable more efficient execution and support larger quantized Transformer models on resource-constrained hardware or under alternative arithmetic systems such as homomorphic encryption. Training experiments on four common benchmark tasks show test-set prediction scores comparable to those of conventional Transformers with dot-product attention. Our scaling experiments also suggest significant computational savings, both in plaintext and under encryption. In particular, we believe that the ReLU and addition-based attention mechanism introduced in this paper may enable privacy-preserving AI applications operating under homomorphic encryption by avoiding the costly multiplication of encrypted variables.
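The abstract describes the mechanism only at a high level (attention built from addition and ReLU, with no dot products or Softmax); the exact formulation appears in the full paper. As a hedged illustration of the general idea, the sketch below scores query-key pairs by their Manhattan distance (additions and absolute values only) and lets large distances inhibit the corresponding values through shifted ReLUs instead of a Softmax weighting. The function name and the precise inhibition rule are assumptions for illustration, not the paper's definitive formulation.

```python
import numpy as np

def inhibitor_attention(Q, K, V):
    """Illustrative addition/ReLU attention (hypothetical formulation).

    Scores are Manhattan distances between queries and keys, so no
    multiplication between activations is needed; distant keys are
    suppressed ("inhibited") by a ReLU threshold rather than being
    down-weighted by a Softmax.
    """
    # Pairwise Manhattan distances: Z[i, j] = sum_k |Q[i, k] - K[j, k]|
    Z = np.abs(Q[:, None, :] - K[None, :, :]).sum(axis=-1)       # (n_q, n_k)
    # Inhibition: subtract the distance from each value before a ReLU,
    # so far-away keys contribute nothing. Two ReLUs preserve the sign
    # of the value while keeping the operation multiplication-free.
    H = np.maximum(V[None, :, :] - Z[:, :, None], 0.0)
    H -= np.maximum(-V[None, :, :] - Z[:, :, None], 0.0)
    return H.sum(axis=1)                                         # (n_q, d)

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
out = inhibitor_attention(Q, K, V)
print(out.shape)
```

Because every step is an addition, subtraction, absolute value, or ReLU, this style of attention maps naturally onto low-precision integer arithmetic and onto homomorphic-encryption schemes where ciphertext-ciphertext multiplication is the dominant cost.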



How to Cite

Brännvall, R. (2024). The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23445-23446.