Semantic Volume: Quantifying and Detecting Both External and Internal Uncertainty in LLMs

Authors

  • Xiaomin Li, Harvard University
  • Zhou Yu, Amazon
  • Ziji Zhang, Amazon
  • Yingying Zhuang, Amazon
  • Swair Shah, Amazon
  • Narayanan Sadagopan, Amazon
  • Anurag Beniwal, Amazon

DOI:

https://doi.org/10.1609/aaai.v40i37.40443

Abstract

Large language models (LLMs) have demonstrated remarkable performance across diverse tasks by encoding vast amounts of factual knowledge. However, they are still prone to hallucinations, generating incorrect or misleading information, often accompanied by high uncertainty. Existing methods for hallucination detection primarily focus on quantifying internal uncertainty, which arises from missing or conflicting knowledge within the model. However, hallucinations can also stem from external uncertainty, where ambiguous user queries lead to multiple possible interpretations. In this work, we introduce **Semantic Volume**, a novel mathematical measure for quantifying both external and internal uncertainty in LLMs. Our approach perturbs queries and responses, embeds them in a semantic space, and computes the determinant of the Gram matrix of the embedding vectors, capturing their dispersion as a measure of uncertainty. Our framework provides a generalizable and unsupervised uncertainty detection method without requiring internal access to LLMs. We conduct extensive experiments on both external and internal uncertainty detection, demonstrating that our Semantic Volume method consistently outperforms existing baselines on both tasks. Additionally, we provide theoretical insights linking our measure to differential entropy, unifying and extending previous sampling-based uncertainty measures such as semantic entropy. Semantic Volume is shown to be a robust and interpretable approach to improving the reliability of LLMs by systematically detecting uncertainty in both user queries and model responses.
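The core computation described in the abstract can be illustrated with a short sketch: embed the sampled perturbations of a query (or the sampled responses), form the Gram matrix of the embedding vectors, and take its log-determinant as the dispersion score. This is a minimal illustration, not the authors' implementation; the unit-normalization step and the small ridge term `eps` are assumed details for numerical stability, and a real pipeline would obtain `embeddings` from a sentence-embedding model rather than synthetic vectors.

```python
import numpy as np

def semantic_volume(embeddings: np.ndarray, eps: float = 1e-6) -> float:
    """Log-determinant of the Gram matrix of embedding vectors.

    embeddings: (n, d) array, one row per sampled response or query
        perturbation (here assumed to come from some embedding model).
    eps: small ridge added to the diagonal for numerical stability
        (an assumed detail, not specified in the abstract).

    Larger values indicate greater semantic dispersion among the samples,
    i.e., higher uncertainty.
    """
    # Unit-normalize rows so the score reflects angular spread, not magnitude.
    V = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Gram matrix of the (normalized) embeddings, regularized on the diagonal.
    G = V @ V.T + eps * np.eye(V.shape[0])
    # slogdet avoids underflow when the determinant is tiny.
    _, logdet = np.linalg.slogdet(G)
    return logdet
```

As a sanity check, a set of near-identical embeddings (a confident, unambiguous case) yields a much smaller log-volume than a set of well-spread embeddings (an uncertain case), since nearly parallel vectors make the Gram matrix close to singular.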

Published

2026-03-14

How to Cite

Li, X., Yu, Z., Zhang, Z., Zhuang, Y., Shah, S., Sadagopan, N., & Beniwal, A. (2026). Semantic Volume: Quantifying and Detecting Both External and Internal Uncertainty in LLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(37), 31751-31759. https://doi.org/10.1609/aaai.v40i37.40443

Section

AAAI Technical Track on Natural Language Processing II