Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow

Authors

  • Jiaqi Bai — Cyberspace Institute of Advanced Technology, Guangzhou University, China; Huangpu Research School of Guangzhou University, China
  • Hongcheng Guo — CCSE, Beihang University, China
  • Zhongyuan Peng — University of the Chinese Academy of Sciences, China
  • Jian Yang — CCSE, Beihang University, China
  • Zhoujun Li — CCSE, Beihang University, China
  • Mohan Li — Cyberspace Institute of Advanced Technology, Guangzhou University, China; Huangpu Research School of Guangzhou University, China
  • Zhihong Tian — Cyberspace Institute of Advanced Technology, Guangzhou University, China; Huangpu Research School of Guangzhou University, China

DOI:

https://doi.org/10.1609/aaai.v39i22.34512

Abstract

Large vision-language models show tremendous potential in understanding visual information through human languages. However, they are prone to object hallucination, i.e., the generated image descriptions contain objects that do not exist in the image. In this paper, we reveal that object hallucination can be attributed to overconfidence in irrelevant visual features when soft visual tokens are mapped to the LLM's word embedding space. Specifically, by computing the semantic similarity between visual tokens and the LLM's word embeddings, we observe that the smoothness of the similarity distribution strongly correlates with the emergence of object hallucinations. To mitigate hallucinations, we propose using the Variational Information Bottleneck (VIB) to alleviate overconfidence by introducing stochastic noise, which constrains the flow of irrelevant information. Furthermore, we propose an entropy-based noise-controlling strategy that adaptively adjusts the injected noise according to the smoothness of the similarity distribution. We apply the proposed method, AdaVIB, across distinct model architectures. Experimental results demonstrate that AdaVIB mitigates object hallucinations by effectively alleviating overconfidence in irrelevant visual features, with consistent improvements on two object hallucination benchmarks.
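The mechanism the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it measures the normalized entropy of each visual token's similarity distribution over the word-embedding vocabulary, then uses that entropy to scale the stochastic noise of a VIB layer via the standard reparameterization trick. All function names, the linear mu/sigma heads, and the monotone entropy-to-noise schedule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def similarity_entropy(visual_tokens, word_embeddings):
    """Normalized entropy of each visual token's cosine-similarity
    distribution over the vocabulary; a smoother (higher-entropy)
    distribution signals a less confident mapping into word space."""
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=-1, keepdims=True)
    w = word_embeddings / np.linalg.norm(word_embeddings, axis=-1, keepdims=True)
    p = softmax(v @ w.T, axis=-1)
    h = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return h / np.log(p.shape[-1])  # rescale to [0, 1]

def adaptive_vib(visual_tokens, word_embeddings, mu_w, sigma_w):
    """VIB layer with entropy-controlled noise (illustrative sketch).
    mu_w / sigma_w are hypothetical linear heads of the VIB encoder."""
    mu = visual_tokens @ mu_w
    sigma = np.logaddexp(0.0, visual_tokens @ sigma_w)  # softplus, > 0
    # More noise where the similarity distribution is smoother
    # (an assumed monotone schedule, not the paper's exact rule).
    scale = similarity_entropy(visual_tokens, word_embeddings)[:, None]
    eps = rng.standard_normal(mu.shape)
    z = mu + scale * sigma * eps  # reparameterization trick
    # KL(N(mu, sigma^2) || N(0, 1)), the usual VIB regularizer
    kl = 0.5 * (mu**2 + sigma**2 - 2.0 * np.log(sigma + 1e-12) - 1.0).sum()
    return z, kl
```

In this sketch, a token whose similarity distribution is sharply peaked (low entropy, confident mapping) passes through almost deterministically, while a token with a smooth distribution receives more noise, limiting how much of its (likely irrelevant) information reaches the language model.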

Published

2025-04-11

How to Cite

Bai, J., Guo, H., Peng, Z., Yang, J., Li, Z., Li, M., & Tian, Z. (2025). Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23442-23450. https://doi.org/10.1609/aaai.v39i22.34512

Section

AAAI Technical Track on Natural Language Processing I