Not All Inconsistency Is Equal: Decomposing LVLM Uncertainty into Belief Divergence and Belief Conflict

Authors

  • Jie Shi School of Computer Engineering and Science, Shanghai University, Shanghai, China
  • Xiaodong Yue Institute of Artificial Intelligence, Shanghai University, Shanghai, China; School of Future Technology, Shanghai University, Shanghai, China
  • Wei Liu School of Computer Science and Technology, Tongji University, Shanghai, China
  • Yufei Chen School of Computer Science and Technology, Tongji University, Shanghai, China
  • Feifan Dong School of Future Technology, Shanghai University, Shanghai, China

DOI

https://doi.org/10.1609/aaai.v40i30.39727

Abstract

Uncertainty Quantification (UQ) is critical for detecting hallucinations in black-box Large Vision-Language Models (LVLMs). However, prevailing methods such as Discrete Semantic Entropy (DSE) are unreliable, as their scores are dominated primarily by the number of semantic clusters. This renders them incapable of distinguishing benign semantic ambiguity (varied but coherent responses) from severe belief conflict (contradictory responses). We address this limitation by proposing a novel framework rooted in the Dempster-Shafer theory of evidence, built on the premise that not all inconsistency is equal. Our method decomposes uncertainty into two complementary metrics: Belief Divergence, which quantifies ambiguity by measuring the separation between viewpoints, and Belief Conflict, which captures direct logical contradictions. Extensive experiments demonstrate that our framework provides a more reliable measure of uncertainty.
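
The abstract invokes Dempster-Shafer theory without spelling out the formulas. As a rough illustration of the conflict side of the decomposition, the sketch below computes Dempster's classical conflict coefficient K, the total combined mass that two bodies of evidence assign to mutually exclusive focal elements. The function name, the representation of semantic clusters as frozensets, and the toy mass assignments are assumptions made for illustration, not the authors' implementation.

    # Minimal sketch (not the paper's code): Dempster's conflict
    # coefficient K between two basic probability assignments (BPAs)
    # defined over semantic clusters of LVLM responses.
    from itertools import product

    def belief_conflict(m1: dict[frozenset, float],
                        m2: dict[frozenset, float]) -> float:
        # K sums the products of masses whose focal elements have an
        # empty intersection, i.e. evidence that directly contradicts.
        return sum(
            v1 * v2
            for (a, v1), (b, v2) in product(m1.items(), m2.items())
            if not (a & b)
        )

    # Toy BPAs over clusters {A, B}: m_agree concentrates mass on A,
    # m_oppose concentrates mass on B.
    m_agree = {frozenset({"A"}): 0.9, frozenset({"A", "B"}): 0.1}
    m_oppose = {frozenset({"B"}): 0.9, frozenset({"A", "B"}): 0.1}

    print(belief_conflict(m_agree, m_agree))   # 0.0  -> coherent evidence
    print(belief_conflict(m_agree, m_oppose))  # 0.81 -> contradictory evidence

In this toy example, varied but compatible responses yield zero conflict while contradictory ones yield high conflict, mirroring the paper's distinction between benign ambiguity and severe belief conflict.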

Published

2026-03-14

How to Cite

Shi, J., Yue, X., Liu, W., Chen, Y., & Dong, F. (2026). Not All Inconsistency Is Equal: Decomposing LVLM Uncertainty into Belief Divergence and Belief Conflict. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25339–25347. https://doi.org/10.1609/aaai.v40i30.39727

Issue

Vol. 40 No. 30 (2026)

Section

AAAI Technical Track on Machine Learning VII