Less Is More: Vision Representation Compression for Efficient Video Generation with Large Language Models
DOI:
https://doi.org/10.1609/aaai.v40i16.38391
Abstract
Video generation using Large Language Models (LLMs) has shown promising potential, effectively leveraging the extensive LLM infrastructure to provide a unified framework for multimodal understanding and content generation. However, these methods face critical challenges, i.e., token redundancy and inefficiencies arising from long sequences, which constrain their performance and efficiency compared to diffusion-based approaches. In this study, we investigate the impact of token redundancy in LLM-based video generation through information-theoretic analysis and propose Vision Representation Compression (VRC), a novel framework designed to achieve more in both performance and efficiency with fewer video token representations. VRC introduces a learnable representation compressor and decompressor to compress video token representations, enabling autoregressive next-sequence prediction in a compact latent space. Our approach reduces redundancy, shortens token sequences, and improves the model's ability to capture underlying video structures. Our experiments demonstrate that VRC reduces token sequence lengths by a factor of 4, achieving a 9-14x acceleration in inference while maintaining performance comparable to state-of-the-art video generation models. VRC not only accelerates inference but also significantly reduces memory requirements during both model training and inference.
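The compress-predict-decompress idea described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the weight matrices, shapes, and grouping scheme are assumptions chosen only to show how a 4x compression of the token sequence could wrap an autoregressive model.

```python
import numpy as np

# Illustrative sketch (names and details are assumptions, not from the paper):
# compress T video tokens into T/r latent tokens, let an autoregressive model
# predict in the compact latent space, then decompress back to full length.

rng = np.random.default_rng(0)
T, d, r = 16, 8, 4                         # sequence length, token dim, compression ratio

W_c = rng.normal(size=(d, r * d)) * 0.1    # stand-in "learnable" compressor weights
W_d = rng.normal(size=(r * d, d)) * 0.1    # stand-in "learnable" decompressor weights

tokens = rng.normal(size=(T, d))           # stand-in for video token embeddings

# Compressor: group r consecutive tokens and project each group to one latent token.
latent = tokens.reshape(T // r, r * d) @ W_c.T       # shape (T/r, d)

# ... an LLM would perform autoregressive next-sequence prediction over `latent` ...

# Decompressor: expand each latent token back into r tokens.
recon = (latent @ W_d.T).reshape(T, d)               # shape (T, d)

print(latent.shape, recon.shape)  # sequence shortened 4x, then restored
```

The sequence seen by the language model is 4x shorter, which is where the reported inference speedup and memory savings would come from; the decompressor restores the original token count for decoding back to pixels.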
Published
2026-03-14
How to Cite
Zhou, Y., Zhang, J., Chen, G., Shen, J., & Cheng, Y. (2026). Less Is More: Vision Representation Compression for Efficient Video Generation with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(16), 13826-13834. https://doi.org/10.1609/aaai.v40i16.38391
Issue
Section
AAAI Technical Track on Computer Vision XIII