Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs

Authors

  • Jiayu Hu, Chongqing University
  • Beibei Li, Chongqing University
  • Jiangwei Xia, Chongqing University
  • Yanjun Qin, Xinjiang University
  • Bing Ji, Chongqing University
  • Zhongshi He, Chongqing University

DOI:

https://doi.org/10.1609/aaai.v40i26.39336

Abstract

While Vision-Language Models (VLMs) have garnered increasing attention in the AI community due to their promising practical applications, they exhibit persistent hallucination issues, generating outputs misaligned with visual inputs. Recent studies attribute these hallucinations to VLMs' over-reliance on linguistic priors and insufficient visual feature integration, and propose heuristic decoding calibration strategies to mitigate them. However, the non-trainable nature of these strategies inherently limits their optimization potential. To this end, we propose ALEAHallu, an adversarial parametric editing framework for hallucination mitigation in VLMs, which follows an Activate-Locate-Edit-Adversarially paradigm. Specifically, we first construct an activation dataset comprising grounded responses (positive samples attentively anchored in visual features) and hallucinatory responses (negative samples reflecting LLM prior bias and internal knowledge artifacts). Next, we identify critical hallucination-prone parameter clusters by analyzing the differential hidden states of response pairs. Then, these clusters are fine-tuned using prompts injected with adversarial prefixes, optimized via prompt tuning to maximize visual neglect, thereby forcing the model to prioritize visual evidence over inherent parametric biases. Evaluations on both generative and discriminative VLM tasks demonstrate the significant effectiveness of ALEAHallu in alleviating hallucinations.
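The "Locate" step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the general idea of ranking hidden units by how strongly their activations diverge between paired grounded and hallucinatory responses, then selecting the most divergent units as hallucination-prone candidates. The function name, array shapes, and top-k selection rule are all illustrative assumptions.

```python
import numpy as np

def locate_hallucination_prone_units(h_pos, h_neg, top_k=2):
    """Illustrative 'Locate' step (not the paper's exact procedure).

    h_pos, h_neg: arrays of shape (n_pairs, n_units) holding hidden states
    for grounded (positive) and hallucinatory (negative) response pairs.
    Returns indices of the top_k units with the largest mean absolute
    activation difference across pairs.
    """
    # Per-unit divergence between grounded and hallucinatory responses.
    divergence = np.abs(h_pos - h_neg).mean(axis=0)
    # Units with the largest divergence are the candidate "clusters" to edit.
    return np.argsort(divergence)[::-1][:top_k]

# Toy example: 4 response pairs, 6 hidden units; units 1 and 4 are made
# to diverge systematically on the hallucinatory side.
rng = np.random.default_rng(0)
h_pos = rng.normal(size=(4, 6))
h_neg = h_pos.copy()
h_neg[:, [1, 4]] += 2.0
print(sorted(locate_hallucination_prone_units(h_pos, h_neg)))  # [1, 4]
```

In a real VLM, `h_pos` and `h_neg` would come from forward-pass hooks over the paired responses, and the selected clusters would then be the only parameters unfrozen during the adversarial fine-tuning stage.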

Published

2026-03-14

How to Cite

Hu, J., Li, B., Xia, J., Qin, Y., Ji, B., & He, Z. (2026). Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21840–21848. https://doi.org/10.1609/aaai.v40i26.39336

Section

AAAI Technical Track on Machine Learning III