Not All Inconsistency Is Equal: Decomposing LVLM Uncertainty into Belief Divergence and Belief Conflict
DOI:
https://doi.org/10.1609/aaai.v40i30.39727
Abstract
Uncertainty Quantification (UQ) is critical for detecting hallucinations in black-box Large Vision-Language Models (LVLMs). However, prevailing methods like Discrete Semantic Entropy (DSE) are unreliable, as their scores are primarily dominated by the number of semantic clusters. This renders them incapable of distinguishing between benign semantic ambiguity (varied but coherent responses) and severe belief conflict (contradictory responses). We address this limitation by proposing a novel framework rooted in the Dempster-Shafer theory of evidence, built on the premise that not all inconsistency is equal. Our method decomposes uncertainty into two complementary metrics: Belief Divergence, which quantifies ambiguity by measuring the separation between viewpoints, and Belief Conflict, which captures direct logical contradictions. Extensive experiments demonstrate that our framework provides a more reliable measure of uncertainty.
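The notion of belief conflict in the abstract comes from Dempster-Shafer theory, where two evidence sources conflict to the extent that they place mass on disjoint focal elements. The sketch below illustrates the standard Dempster conflict coefficient K on a toy frame of discernment; the frame, the mass functions, and the `conflict` helper are illustrative assumptions, not the paper's actual metrics or implementation.

```python
# Minimal sketch of Dempster-Shafer conflict between two evidence sources.
# The frame {"cat", "dog"} and the mass assignments are hypothetical examples.
from itertools import product


def conflict(m1, m2):
    """Dempster's conflict coefficient K: total product mass that the two
    sources assign to focal elements with empty intersection."""
    return sum(
        w1 * w2
        for (a, w1), (b, w2) in product(m1.items(), m2.items())
        if not (a & b)  # disjoint focal elements => contradictory evidence
    )


# Focal elements are frozensets of hypotheses; masses sum to 1 per source.
agree = {frozenset({"cat"}): 0.8, frozenset({"cat", "dog"}): 0.2}
contradict = {frozenset({"dog"}): 0.8, frozenset({"cat", "dog"}): 0.2}

print(conflict(agree, agree))       # no mass on disjoint sets: K = 0
print(conflict(agree, contradict))  # 0.8 * 0.8 lands on {cat} vs {dog}: K = 0.64
```

Intuitively, high K flags responses that directly contradict one another, whereas varied-but-compatible responses (overlapping focal elements) keep K low; this is the distinction that, per the abstract, a single cluster-count-driven entropy score cannot make.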
Published
2026-03-14
How to Cite
Shi, J., Yue, X., Liu, W., Chen, Y., & Dong, F. (2026). Not All Inconsistency Is Equal: Decomposing LVLM Uncertainty into Belief Divergence and Belief Conflict. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25339–25347. https://doi.org/10.1609/aaai.v40i30.39727
Section
AAAI Technical Track on Machine Learning VII