Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty Aware Perspective

Authors

  • Bo Ni, Vanderbilt University
  • Yu Wang, University of Oregon
  • Lu Cheng, University of Illinois at Chicago
  • Erik Blasch, Air Force Research Lab
  • Tyler Derr, Vanderbilt University

DOI:

https://doi.org/10.1609/aaai.v39i12.33353

Abstract

Recently, Knowledge Graphs (KGs) have been successfully coupled with Large Language Models (LLMs) to mitigate their hallucinations and enhance their reasoning capability, e.g., in KG-based retrieval-augmented frameworks. However, current KG-LLM frameworks lack rigorous uncertainty estimation, limiting their reliable deployment in applications where the cost of errors is significant. Directly incorporating uncertainty quantification into KG-LLM frameworks presents a challenge due to their complex architectures and the intricate interactions between the knowledge graph and language model components. To address this crucial gap, we propose a new trustworthy KG-LLM framework, UAG (Uncertainty Aware Knowledge-Graph Reasoning), which incorporates uncertainty quantification into the KG-LLM framework. We design an uncertainty-aware multi-step reasoning framework that leverages conformal prediction to provide a theoretical guarantee on the prediction set. To manage the error rate of the multi-step process, we additionally introduce an error rate control module to adjust the error rate within the individual components. Extensive experiments show that UAG can achieve any pre-defined coverage rate while reducing the prediction set/interval size by 40% on average over the baselines.
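The coverage guarantee the abstract refers to comes from conformal prediction. As a rough illustration only (this is a generic split-conformal sketch on synthetic scores, not the paper's UAG implementation, and all names and data here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical classifier probabilities for a 5-class toy task,
# standing in for a reasoning component's candidate-answer scores.
n_cal, n_classes = 1000, 5
logits = rng.normal(size=(n_cal, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, n_classes, size=n_cal)

# Nonconformity score: 1 minus the probability assigned to the true label.
scores = 1.0 - probs[np.arange(n_cal), labels]

# Conformal quantile for a user-chosen error rate alpha;
# split conformal then guarantees coverage >= 1 - alpha on exchangeable data.
alpha = 0.1
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction set for a new example: every label whose score is below the quantile.
new_probs = probs[0]
pred_set = np.where(1.0 - new_probs <= q)[0]
```

Shrinking these prediction sets while keeping the pre-defined coverage rate is exactly the trade-off the paper's error rate control module targets across the multi-step pipeline.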

Published

2025-04-11

How to Cite

Ni, B., Wang, Y., Cheng, L., Blasch, E., & Derr, T. (2025). Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty Aware Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 39(12), 12417–12425. https://doi.org/10.1609/aaai.v39i12.33353

Section

AAAI Technical Track on Data Mining & Knowledge Management II