URaG: Unified Retrieval and Generation in Multimodal LLMs for Efficient Long Document Understanding

Authors

  • Yongxin Shi, South China University of Technology
  • Jiapeng Wang, South China University of Technology
  • Zeyu Shan, South China University of Technology
  • Dezhi Peng, Huawei Technologies Co., Ltd.
  • Zening Lin, South China University of Technology
  • Lianwen Jin, South China University of Technology

DOI:

https://doi.org/10.1609/aaai.v40i30.39729

Abstract

Recent multimodal large language models (MLLMs) still struggle with long document understanding due to two fundamental challenges: information interference from abundant irrelevant content, and the quadratic computational cost of Transformer-based architectures. Existing approaches primarily fall into two categories: token compression, which sacrifices fine-grained details, and external retrievers, which increase system complexity and prevent end-to-end optimization. To address these issues, we conduct an in-depth analysis and observe that MLLMs exhibit a human-like coarse-to-fine reasoning pattern: early Transformer layers attend broadly across the document, while deeper layers focus on relevant evidence pages. Motivated by this insight, we posit that the inherent evidence localization capabilities of MLLMs can be explicitly leveraged to perform retrieval during the reasoning process, facilitating efficient long document understanding. To this end, we propose URaG, a simple yet effective framework that Unifies Retrieval and Generation within a single MLLM. URaG introduces a lightweight cross-modal retrieval module that converts the early Transformer layers into an efficient evidence selector, identifying and preserving the most relevant pages while discarding irrelevant content. This design enables the deeper layers to concentrate computational resources on pertinent information, improving both accuracy and efficiency. Extensive experiments demonstrate that URaG achieves state-of-the-art performance while reducing computational overhead by 44–56%.
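To make the abstract's core idea concrete, the sketch below is a minimal, illustrative interpretation rather than the authors' released code: it assumes page-level token spans, a hypothetical lightweight projection head that scores each page against the pooled question representation taken from an early layer's hidden states, and top-k pruning so that only the selected pages (plus the question tokens) are forwarded to deeper layers. All names, dimensions, and the cosine-similarity scoring are assumptions made for illustration.

    import torch
    import torch.nn as nn

    # Illustrative sketch (not the paper's implementation): prune page tokens
    # after an early Transformer layer using a lightweight cross-modal score.
    class EarlyLayerPageRetriever(nn.Module):
        def __init__(self, hidden_dim: int, proj_dim: int = 256):
            super().__init__()
            # Hypothetical lightweight projection heads for pages and the query.
            self.page_proj = nn.Linear(hidden_dim, proj_dim)
            self.query_proj = nn.Linear(hidden_dim, proj_dim)

        def forward(self, hidden, page_spans, query_span, top_k=2):
            # hidden: (seq_len, hidden_dim) hidden states from an early layer
            # page_spans: list of (start, end) token ranges, one per document page
            # query_span: (start, end) token range of the textual question
            q = self.query_proj(hidden[query_span[0]:query_span[1]].mean(dim=0))
            page_embs = torch.stack(
                [self.page_proj(hidden[s:e].mean(dim=0)) for s, e in page_spans]
            )
            scores = torch.nn.functional.cosine_similarity(page_embs, q.unsqueeze(0))
            keep = torch.topk(scores, k=min(top_k, len(page_spans))).indices
            # Keep only tokens of the selected pages (plus the query) for deeper layers.
            kept_ranges = [page_spans[i] for i in sorted(keep.tolist())] + [query_span]
            kept_ids = torch.cat([torch.arange(s, e) for s, e in kept_ranges])
            return hidden[kept_ids], scores

    # Toy usage with random hidden states: 3 pages of 4 tokens each plus a 2-token query.
    hidden = torch.randn(14, 1024)
    retriever = EarlyLayerPageRetriever(hidden_dim=1024)
    pruned, page_scores = retriever(hidden, [(0, 4), (4, 8), (8, 12)], (12, 14), top_k=1)
    print(pruned.shape, page_scores)

Under this reading, the retrieval step adds only a small projection-and-scoring overhead at an early layer, while the quadratic attention cost of the deeper layers is paid only over the retained pages, which is consistent with the 44–56% overhead reduction reported in the abstract.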

Published

2026-03-14

How to Cite

Shi, Y., Wang, J., Shan, Z., Peng, D., Lin, Z., & Jin, L. (2026). URaG: Unified Retrieval and Generation in Multimodal LLMs for Efficient Long Document Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25357–25365. https://doi.org/10.1609/aaai.v40i30.39729

Issue

Vol. 40 No. 30 (2026)

Section

AAAI Technical Track on Machine Learning VII