Towards End-to-End Image Compression and Analysis with Transformers
Keywords: Computer Vision (CV)
Abstract
We propose an end-to-end image compression and analysis model with Transformers, targeting cloud-based image classification applications. Instead of placing an existing Transformer-based image classification model directly after an image codec, we redesign the Vision Transformer (ViT) model to perform image classification directly from the compressed features, and to facilitate image compression with the long-term information extracted by the Transformer. Specifically, we first replace the patchify stem (i.e., image splitting and embedding) of the ViT model with a lightweight image encoder modelled by a convolutional neural network. The compressed features generated by the image encoder are thus injected with convolutional inductive bias and are fed to the Transformer for image classification, bypassing image reconstruction. Meanwhile, we propose a feature aggregation module that fuses the compressed features with selected intermediate features of the Transformer, and feeds the aggregated features to a deconvolutional neural network for image reconstruction. The aggregated features benefit from the long-term information captured by the self-attention mechanism of the Transformer, which improves compression performance. The resulting rate-distortion-accuracy optimization problem is solved by a two-step training strategy. Experimental results demonstrate the effectiveness of the proposed model on both image compression and classification tasks.
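The rate-distortion-accuracy trade-off mentioned in the abstract can be sketched as a joint objective. The notation below is ours, not necessarily the paper's: λ and β are hypothetical trade-off weights balancing the three terms.

```latex
% Hypothetical joint objective for rate-distortion-accuracy optimization:
%   R       -- estimated bit-rate of the compressed features y
%   D       -- distortion between the input x and its reconstruction \hat{x}
%   L_cls   -- classification loss (e.g., cross-entropy) between the
%              predicted label \hat{c} and the ground-truth label c
\mathcal{L} = R(y) + \lambda\, D(x, \hat{x}) + \beta\, L_{\mathrm{cls}}(\hat{c}, c)
```

Under this reading, the two-step training strategy would first optimize a subset of these terms (e.g., rate and accuracy for the encoder-Transformer path) before jointly fine-tuning the full objective, though the exact split is specified in the paper itself.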
How to Cite
Bai, Y., Yang, X., Liu, X., Jiang, J., Wang, Y., Ji, X., & Gao, W. (2022). Towards End-to-End Image Compression and Analysis with Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 104-112. https://doi.org/10.1609/aaai.v36i1.19884
AAAI Technical Track on Computer Vision I