Next Patch Prediction for AutoRegressive Visual Generation

Authors

  • Yatian Pang National University of Singapore, Peking University
  • Peng Jin Peking University
  • Shuo Yang Peking University
  • Bin Zhu Peking University
  • Bin Lin Peking University
  • Chaoran Feng Peking University
  • Zhenyu Tang Peking University
  • Liuhan Chen Peking University
  • Francis E. H. Tay National University of Singapore
  • Ser-Nam Lim University of Central Florida, Everlyn
  • Harry Yang Hong Kong University of Science and Technology, Everlyn
  • Li Yuan Peking University, PengCheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v40i10.37774

Abstract

Autoregressive models, built on the Next Token Prediction (NTP) paradigm, show great potential for developing a unified framework that integrates both language and vision tasks. Pioneering works introduce NTP to autoregressive visual generation tasks. In this work, we rethink NTP for autoregressive image generation and extend it to a novel Next Patch Prediction (NPP) paradigm. Our key idea is to group and aggregate image tokens into patch tokens with higher information density. By using patch tokens as a more compact input sequence, the autoregressive model is trained to predict the next patch, significantly reducing computational costs. To further exploit the natural hierarchical structure of image data, we propose a multi-scale coarse-to-fine patch grouping strategy. With this strategy, training begins with a large patch size and ends with vanilla NTP, where the patch size is 1x1, thus preserving the original inference process without modification. Extensive experiments across a diverse range of model sizes demonstrate that NPP reduces the training cost to around 0.6x the original while improving image generation quality by up to 1.0 FID score on the ImageNet 256x256 generation benchmark. Notably, our method retains the original autoregressive model architecture without introducing additional trainable parameters or requiring a specially designed image tokenizer, offering a flexible, plug-and-play solution for enhancing autoregressive visual generation.
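The grouping step described in the abstract — aggregating a grid of image tokens into fewer, denser patch tokens — can be sketched as below. Note this is an illustrative assumption, not the paper's implementation: the function name, the use of average pooling as the aggregation operator, and the toy dimensions are all hypothetical.

```python
import numpy as np

def group_tokens_into_patches(tokens, grid_hw, patch_size):
    """Aggregate a flattened H*W grid of token embeddings into patch tokens
    by average-pooling non-overlapping patch_size x patch_size windows.

    tokens:    (H*W, D) array of token embeddings
    grid_hw:   (H, W) spatial layout of the token grid
    patch_size: side length p of each square patch (must divide H and W)
    returns:   ((H/p) * (W/p), D) array of patch tokens
    """
    H, W = grid_hw
    p = patch_size
    assert H % p == 0 and W % p == 0, "patch size must divide the grid"
    D = tokens.shape[1]
    grid = tokens.reshape(H, W, D)
    # Split the grid into p x p blocks, then average the tokens in each block.
    patches = grid.reshape(H // p, p, W // p, p, D).mean(axis=(1, 3))
    return patches.reshape(-1, D)

# Toy example: a 4x4 token grid with 8-dim embeddings, grouped into 2x2 patches.
tokens = np.random.rand(16, 8)
patch_tokens = group_tokens_into_patches(tokens, (4, 4), 2)
print(patch_tokens.shape)  # 4 patch tokens remain, sequence is 4x shorter
```

With patch_size=1 the function returns the original token sequence unchanged, which mirrors the coarse-to-fine schedule in the abstract: training ends at vanilla NTP, so inference needs no modification.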

Published

2026-03-14

How to Cite

Pang, Y., Jin, P., Yang, S., Zhu, B., Lin, B., Feng, C., Tang, Z., Chen, L., Tay, F. E. H., Lim, S.-N., Yang, H., & Yuan, L. (2026). Next Patch Prediction for AutoRegressive Visual Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(10), 8260-8268. https://doi.org/10.1609/aaai.v40i10.37774

Section

AAAI Technical Track on Computer Vision VII