Look as You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning

Authors

  • Shuochen Liu University of Science and Technology of China
  • Pengfei Luo University of Science and Technology of China
  • Chao Zhang University of Science and Technology of China
  • Yuhao Chen University of Science and Technology of China
  • Haotian Zhang University of Science and Technology of China
  • Qi Liu University of Science and Technology of China
  • Xin Kou University of Science and Technology of China
  • Tong Xu University of Science and Technology of China
  • Enhong Chen University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v40i38.40488

Abstract

Aiming to identify precise evidence sources from visual documents, visual evidence attribution for visual document retrieval-augmented generation (VD-RAG) ensures reliable and verifiable predictions from vision-language models (VLMs) in multimodal question answering. Most existing methods adopt end-to-end training to facilitate intuitive answer verification. However, they lack fine-grained supervision and progressive traceability throughout the reasoning process. In this paper, we introduce the Chain-of-Evidence (CoE) paradigm for VD-RAG. CoE unifies Chain-of-Thought (CoT) reasoning and visual evidence attribution by grounding reference elements in reasoning steps to specific regions with bounding boxes and page indexes. To enable VLMs to generate such evidence-grounded reasoning, we propose Look As You Think (LAT), a reinforcement learning framework that trains models to produce verifiable reasoning paths with consistent attribution. During training, LAT evaluates the attribution consistency of each evidence region and provides rewards only when the CoE trajectory yields correct answers, encouraging process-level self-verification. Experiments on the Paper-VISA and Wiki-VISA benchmarks show that LAT consistently improves vanilla Qwen2.5-VL-7B-Instruct in both single- and multi-image settings, yielding average gains of 8.23% in soft exact match (EM) and 47.0% in IoU@0.5. Meanwhile, LAT not only outperforms the supervised fine-tuning baseline, which is trained to directly produce answers with attribution, but also exhibits stronger generalization across domains.
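The abstract describes a gated reward: attribution consistency is scored per evidence region, but credit is granted only when the CoE trajectory produces a correct answer. The paper defines the exact formulation; as a minimal sketch of that gating idea, assuming boxes are given as (x1, y1, x2, y2) and matching uses the IoU@0.5 threshold reported in the evaluation, one might write (the names `iou` and `coe_reward` are illustrative, not the authors' code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def coe_reward(answer_correct, pred_boxes, gold_boxes, iou_thresh=0.5):
    """Reward only correct-answer trajectories; score attribution
    as the fraction of gold evidence regions matched at IoU >= thresh."""
    if not answer_correct:
        return 0.0  # the gate: no attribution credit without a correct answer
    matched = sum(any(iou(p, g) >= iou_thresh for p in pred_boxes)
                  for g in gold_boxes)
    return matched / max(len(gold_boxes), 1)
```

The gate is what makes the signal process-level: a trajectory cannot collect attribution reward by grounding plausible-looking regions while answering incorrectly.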

Published

2026-03-14

How to Cite

Liu, S., Luo, P., Zhang, C., Chen, Y., Zhang, H., Liu, Q., Kou, X., Xu, T., & Chen, E. (2026). Look as You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32159-32167. https://doi.org/10.1609/aaai.v40i38.40488

Section

AAAI Technical Track on Natural Language Processing III