BVT-IMA: Binary Vision Transformer with Information-Modified Attention
DOI: https://doi.org/10.1609/aaai.v38i14.29505
Keywords:
ML: Learning on the Edge & Model Compression, ML: Classification and Regression, ML: Deep Neural Architectures and Foundation Models
Abstract
As a compression method that can significantly reduce computation and memory costs, model binarization has been studied extensively for convolutional neural networks. However, the recently popular vision transformer models pose new challenges to this technique: binarized vision transformers suffer from serious performance drops. In this paper, we observe an attention-shifting phenomenon in the binary multi-head self-attention module, which can disturb the information fusion between tokens and thus hurt model performance. From the perspective of information theory, we find a correlation between attention scores and information quantity, further indicating that one cause of this phenomenon may be the loss of information quantity induced by the constant moduli of binarized tokens. Finally, we reveal the information quantity hidden in the attention maps of binary vision transformers and propose a simple approach that modifies attention values with look-up information tables, thereby improving model performance. Extensive experiments on CIFAR-100, TinyImageNet, and ImageNet-1k demonstrate the effectiveness of the proposed information-modified attention on binary vision transformers.
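The abstract's core idea can be illustrated with a minimal sketch. This is not the authors' implementation: the binarization (sign function), the table indexing (by the agreement count between binary query and key vectors), and the table values are all assumptions made for illustration. It shows how, once tokens are binarized to constant-modulus vectors, attention scores can be shifted by values read from a small look-up table before the softmax.

```python
import numpy as np

def sign_binarize(x):
    """Binarize to {-1, +1}; every row then has the same L2 norm (constant modulus)."""
    return np.where(x >= 0, 1.0, -1.0)

def info_modified_attention(q, k, v, table):
    """Single-head attention with scores shifted by look-up values (illustrative form).

    `table` is a hypothetical array of length d+1, indexed by the number of
    coordinates on which a binary query and key agree.
    """
    qb, kb = sign_binarize(q), sign_binarize(k)
    d = q.shape[-1]
    scores = qb @ kb.T / np.sqrt(d)              # plain binary dot-product attention
    match = ((qb @ kb.T) + d) / 2                # agreement count, an integer in [0, d]
    scores = scores + table[match.astype(int)]   # look-up modification (assumed indexing)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)        # softmax over keys
    return w @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
table = np.linspace(0.0, 0.5, 9)                 # hypothetical table for d = 8
out = info_modified_attention(q, k, v, table)
print(out.shape)  # (4, 8)
```

Because the table is indexed by a small integer statistic of the binary tokens, the modification adds only a cheap lookup per query-key pair on top of the binary attention computation.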
Published
2024-03-24
How to Cite
Wang, Z., Luo, H., Xie, X., Wang, F., & Shi, G. (2024). BVT-IMA: Binary Vision Transformer with Information-Modified Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15761-15769. https://doi.org/10.1609/aaai.v38i14.29505
Section
AAAI Technical Track on Machine Learning V