Seeing Is Believing: Rich-Context Hallucination Detection for MLLMs via Backward Visual Grounding

Authors

  • Pinxue Guo College of Intelligent Robotics and Advanced Manufacturing, Fudan University
  • Chongruo Wu Independent Researcher
  • Xinyu Zhou College of Computational Science and Artificial Intelligence, Fudan University
  • Lingyi Hong College of Computational Science and Artificial Intelligence, Fudan University
  • Zhaoyu Chen College of Intelligent Robotics and Advanced Manufacturing, Fudan University
  • Jinglun Li College of Intelligent Robotics and Advanced Manufacturing, Fudan University
  • Kaixun Jiang College of Intelligent Robotics and Advanced Manufacturing, Fudan University
  • Sen-Ching Samson Cheung Electrical and Computer Engineering, University of Kentucky
  • Wei Zhang College of Computational Science and Artificial Intelligence, Fudan University
  • Wenqiang Zhang College of Intelligent Robotics and Advanced Manufacturing, Fudan University; College of Computational Science and Artificial Intelligence, Fudan University

DOI:

https://doi.org/10.1609/aaai.v40i37.40345

Abstract

Multimodal Large Language Models (MLLMs) have unlocked powerful cross-modal capabilities, but still suffer significantly from hallucinations. Accurate detection of hallucinations in MLLMs is therefore imperative for ensuring their reliability in practical applications. To this end, guided by the principle of “Seeing is Believing”, we introduce VBackChecker, a novel reference-free hallucination detection framework that verifies the consistency of MLLM-generated responses with visual inputs by leveraging a pixel-level Grounding LLM equipped with reasoning and referring segmentation capabilities. This reference-free framework not only effectively handles rich-context scenarios but also offers interpretability. To facilitate this, an innovative pipeline is designed for generating instruction-tuning data (R-Instruct), featuring rich-context descriptions, grounding masks, and hard negative samples. We further establish R²-HalBench, a new hallucination benchmark for MLLMs which, unlike previous benchmarks, encompasses real-world, rich-context descriptions from 18 MLLMs with high-quality annotations, spanning diverse object-, attribute-, and relationship-level details. VBackChecker outperforms prior complex frameworks and achieves state-of-the-art performance on R²-HalBench, even rivaling GPT-4o’s capabilities in hallucination detection. It also surpasses prior methods in the pixel-level grounding task, achieving over a 10% improvement.

Published

2026-03-14

How to Cite

Guo, P., Wu, C., Zhou, X., Hong, L., Chen, Z., Li, J., … Zhang, W. (2026). Seeing Is Believing: Rich-Context Hallucination Detection for MLLMs via Backward Visual Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 40(37), 30871–30879. https://doi.org/10.1609/aaai.v40i37.40345

Section

AAAI Technical Track on Natural Language Processing II