Multi-Agent Undercover Gaming: Hallucination Removal Through Counterfactual Test for Multimodal Reasoning

Authors

  • Dayong Liang — South China University of Technology, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China
  • Xiao-Yong Wei — Sichuan University, Chengdu, China; The Hong Kong Polytechnic University, Hong Kong, China; Peng Cheng Laboratory, Shenzhen, China
  • Changmeng Zheng — The Hong Kong Polytechnic University, Hong Kong, China

DOI:

https://doi.org/10.1609/aaai.v40i8.37613

Abstract

Hallucination continues to pose a major obstacle to the reasoning capabilities of large language models (LLMs). Although the Multi-Agent Debate (MAD) paradigm offers a promising solution by promoting consensus among multiple agents to enhance reliability, it relies on the unrealistic assumption that all debaters are rational and reflective, a condition that may not hold when agents themselves are prone to hallucinations. To address this gap, we introduce the Multi-agent Undercover Gaming (MUG) protocol, inspired by social deduction games like "Who is Undercover?". MUG reframes MAD as a process of detecting "undercover" agents (those suffering from hallucinations) by employing multimodal counterfactual tests. Specifically, we modify reference images to introduce counterfactual evidence and observe whether agents can accurately identify these changes, providing ground truth for identifying hallucinating agents and enabling robust, crowd-powered multimodal reasoning. MUG advances MAD protocols along three key dimensions: (1) enabling factual verification beyond statistical consensus through counterfactual testing; (2) introducing cross-evidence reasoning via dynamically modified evidence sources instead of relying on static inputs; and (3) fostering active reasoning, where agents engage in probing discussions rather than passively answering questions. Collectively, these innovations offer a more reliable and effective framework for multimodal reasoning in LLMs.
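The core counterfactual test can be sketched as follows. This is a minimal toy illustration of the idea described in the abstract, not the authors' implementation: the agent functions, the dictionary-based "images", and the `counterfactual_test` helper are all hypothetical, and real agents would be multimodal LLMs queried over actual edited images.

```python
# Toy sketch of MUG's counterfactual test: edit the reference evidence,
# then flag agents whose answers do not track the edit as "undercover"
# (i.e., hallucinating). All names here are illustrative assumptions.

def counterfactual_test(agents, original, modified, changed_fact):
    """Return the names of agents that fail the counterfactual test.

    An agent grounded in the evidence should mention `changed_fact`
    only when describing the modified image; an agent whose report is
    unchanged by the edit is likely hallucinating.
    """
    undercover = []
    for name, describe in agents.items():
        before = describe(original)
        after = describe(modified)
        # If the edit makes no difference to the agent's report,
        # the agent is not actually reading the evidence.
        if (changed_fact in after) == (changed_fact in before):
            undercover.append(name)
    return undercover


# Toy "images" represented as attribute dicts; the counterfactual edit
# changes the car's color from red to blue.
original = {"car_color": "red"}
modified = {"car_color": "blue"}

agents = {
    "grounded": lambda img: f"a {img['car_color']} car",
    "hallucinating": lambda img: "a red car",  # ignores the evidence
}

print(counterfactual_test(agents, original, modified, "blue"))
```

Running this flags only the `hallucinating` agent, since its description stays the same whether or not the counterfactual edit is present, which is the signal MUG uses to remove unreliable debaters from the consensus.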

Published

2026-03-14

How to Cite

Liang, D., Wei, X.-Y., & Zheng, C. (2026). Multi-Agent Undercover Gaming: Hallucination Removal Through Counterfactual Test for Multimodal Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 6807–6815. https://doi.org/10.1609/aaai.v40i8.37613

Section

AAAI Technical Track on Computer Vision V