ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models

Authors

  • Yeji Park Sogang University
  • Deokyeong Lee Sogang University
  • Junsuk Choe Sogang University
  • Buru Chang Korea University

DOI:

https://doi.org/10.1609/aaai.v39i6.32689

Abstract

Hallucinations in Multimodal Large Language Models (MLLMs), where generated responses fail to accurately reflect the given image, pose a significant challenge to their reliability. To address this, we introduce ConVis, a novel training-free contrastive decoding method. ConVis leverages a text-to-image (T2I) generation model to semantically reconstruct the given image from hallucinated captions. By comparing the contrasting probability distributions produced by the original and reconstructed images, ConVis enables MLLMs to capture visual contrastive signals that penalize hallucination generation. Notably, this method operates purely within the decoding process, eliminating the need for additional data or model updates. Our extensive experiments on five popular benchmarks demonstrate that ConVis effectively reduces hallucinations across various MLLMs, highlighting its potential to enhance model reliability.
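The core idea, contrasting next-token distributions conditioned on the original image against those conditioned on the hallucination-visualizing reconstruction, can be illustrated with a minimal sketch of a generic contrastive-decoding update. This is an assumption-laden illustration, not the paper's exact formulation: the combination rule and the `alpha` penalty weight here are common choices in the contrastive-decoding literature, not quoted from ConVis.

```python
import numpy as np

def contrastive_logits(logits_orig, logits_recon, alpha=1.0):
    """Combine two next-token logit vectors via contrastive decoding.

    Hypothetical, generic update: tokens that become MORE likely under
    the reconstructed (hallucination-visualizing) image than under the
    original image are penalized; alpha sets the penalty strength.
    """
    logits_orig = np.asarray(logits_orig, dtype=float)
    logits_recon = np.asarray(logits_recon, dtype=float)
    return (1.0 + alpha) * logits_orig - alpha * logits_recon

# Toy vocabulary of three tokens. Token 2 is strongly favored only when
# conditioning on the reconstructed image (a hallucination signal), so
# the contrastive update suppresses it.
orig = np.array([2.0, 1.0, 1.5])    # logits given the original image
recon = np.array([1.0, 1.0, 3.0])   # logits given the reconstructed image
adjusted = contrastive_logits(orig, recon, alpha=1.0)
```

In a real decoder this adjustment would be applied at every generation step before sampling or greedy selection; here the point is only that the hallucination-associated token's score drops relative to its score under the original image alone.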

Published

2025-04-11

How to Cite

Park, Y., Lee, D., Choe, J., & Chang, B. (2025). ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6434-6442. https://doi.org/10.1609/aaai.v39i6.32689

Section

AAAI Technical Track on Computer Vision V