MRT: Learning Compact Representations with Mixed RWKV-Transformer for Extreme Image Compression

Authors

  • Han Liu, Harbin Institute of Technology
  • Hengyu Man, Harbin Institute of Technology
  • Xingtao Wang, Harbin Institute of Technology
  • Wenrui Li, Harbin Institute of Technology
  • Debin Zhao, Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v40i9.37650

Abstract

Recent advances in extreme image compression have revealed that mapping pixel data into highly compact latent representations can significantly improve coding efficiency. However, most existing methods compress images into 2-D latent spaces via convolutional neural networks (CNNs) or Swin Transformers, which tend to retain substantial spatial redundancy, thereby limiting overall compression performance. In this paper, we propose a novel Mixed RWKV-Transformer (MRT) architecture that encodes images into more compact 1-D latent representations by integrating the complementary strengths of linear-attention-based RWKV and self-attention-based Transformer models. Specifically, MRT partitions each image into fixed-size windows, utilizing RWKV modules to capture global dependencies across windows and Transformer blocks to model local redundancies within each window. This hierarchical attention mechanism enables more efficient and compact representation learning in the 1-D domain. To further enhance compression efficiency, we introduce a dedicated RWKV Compression Model (RCM) tailored to the structural characteristics of the intermediate 1-D latent features in MRT. Extensive experiments on standard image compression benchmarks validate the effectiveness of our approach. The proposed MRT framework consistently achieves superior reconstruction quality at bitrates below 0.02 bits per pixel (bpp). Quantitative results based on the DISTS metric show that MRT significantly outperforms the state-of-the-art 2-D architecture GLC, achieving bitrate savings of 43.75% and 30.59% on the Kodak and CLIC2020 test datasets, respectively.
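To make the hierarchical attention idea concrete, the following is a minimal PyTorch sketch of one mixed block as the abstract describes it: tokens are partitioned into fixed-size windows, standard self-attention models local redundancy within each window, and a linear-complexity mixer (a simplified stand-in for the RWKV module, not the authors' implementation) propagates global context across window summaries. All module names, dimensions, and the pooling-and-broadcast scheme are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class GlobalLinearMixer(nn.Module):
    """Simplified stand-in for the RWKV module: mixes information across
    windows with linear complexity via a cumulative-sum recurrence,
    instead of quadratic self-attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.receptance = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_windows, dim) -- one summary token per window
        r = torch.sigmoid(self.receptance(x))
        kv = self.key(x).softmax(dim=-1) * self.value(x)
        global_ctx = kv.cumsum(dim=1)  # causal aggregation, linear in length
        return self.out(r * global_ctx)


class MRTBlockSketch(nn.Module):
    """One mixed block: local self-attention within windows, then
    global linear mixing across window summaries (hypothetical layout)."""

    def __init__(self, dim: int, window: int, heads: int = 4):
        super().__init__()
        self.window = window
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_mix = GlobalLinearMixer(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim); seq_len divisible by window size
        b, n, d = tokens.shape
        w = self.window
        x = tokens.reshape(b * n // w, w, d)        # split into windows
        h = self.norm1(x)
        x = x + self.local_attn(h, h, h)[0]         # local redundancy
        x = x.reshape(b, n, d)
        # Summarize each window by mean-pooling, mix summaries globally,
        # then broadcast the global context back to every token.
        summaries = x.reshape(b, n // w, w, d).mean(dim=2)
        ctx = self.global_mix(self.norm2(summaries))
        return x + ctx.repeat_interleave(w, dim=1)


if __name__ == "__main__":
    block = MRTBlockSketch(dim=64, window=16)
    latent = torch.randn(2, 256, 64)  # e.g. 256 tokens from a patchified image
    print(block(latent).shape)        # torch.Size([2, 256, 64])
```

The design choice this sketch illustrates is the division of labor the abstract claims: the quadratic-cost self-attention only ever sees a small fixed window, while cross-window interaction goes through the linear-cost mixer, keeping the overall block efficient for long 1-D token sequences.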

Published

2026-03-14

How to Cite

Liu, H., Man, H., Wang, X., Li, W., & Zhao, D. (2026). MRT: Learning Compact Representations with Mixed RWKV-Transformer for Extreme Image Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 40(9), 7141–7149. https://doi.org/10.1609/aaai.v40i9.37650

Section

AAAI Technical Track on Computer Vision VI