Mitigating Hallucinations in Large Language Models via Causal Reasoning

Authors

  • Yuangang Li, University of Southern California
  • Yiqing Shen, Johns Hopkins University
  • Yi Nian, University of Southern California
  • Jiechao Gao, Stanford University
  • Ziyi Wang, University of Maryland, College Park
  • Chenxiao Yu, University of Southern California
  • Li Li, University of Southern California
  • Jie Wang, Stanford University
  • Xiyang Hu, Arizona State University
  • Yue Zhao, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v40i38.40454

Abstract

Large language models (LLMs) exhibit logically inconsistent hallucinations that appear coherent yet violate reasoning principles, and recent research suggests an inverse relationship between causal reasoning capability and such hallucinations. However, existing reasoning approaches in LLMs, such as Chain-of-Thought (CoT) and its graph-based variants, operate at the level of linguistic tokens rather than modeling the underlying causal relationships between variables, and therefore cannot represent conditional independencies or satisfy causal identification assumptions. To bridge this gap, we introduce causal DAG construction and reasoning (CDCR-SFT), a supervised fine-tuning framework that trains LLMs to explicitly construct a variable-level directed acyclic graph (DAG) and then perform reasoning over it. We also present CausalDR, a dataset of 25,368 samples in which each sample includes an input question, an explicit causal DAG, a graph-based reasoning trace, and a validated answer. Experiments on four LLMs across eight tasks show that CDCR-SFT improves causal reasoning capability, achieving state-of-the-art 95.33% accuracy on CLADDER (surpassing human performance of 94.8% for the first time), and reduces hallucination on HaluEval by 10%. These results demonstrate that explicit causal structure modeling in LLMs can effectively mitigate logical inconsistencies in their outputs.
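
The abstract's two-stage recipe, construct a variable-level DAG and then reason over it, can be illustrated with a small sketch. The snippet below is an illustration under stated assumptions, not the authors' released code: the d_separated helper and the smoking/tar/cancer variables are hypothetical. It builds a toy causal DAG and answers a conditional-independence query via the moralized-ancestral-graph criterion, exactly the kind of structural check that token-level CoT cannot express.

# Illustrative sketch only (assumptions: dict-based DAG, no external
# libraries; not the CDCR-SFT implementation). Step 1 builds a
# variable-level DAG; step 2 reasons over it with a d-separation test
# for conditional independence.
from collections import deque
from itertools import combinations

def ancestors(parents, nodes):
    """Return `nodes` together with all of their ancestors."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """True iff X is d-separated from Y given Z in the DAG."""
    # Restrict to ancestors of X, Y, and Z, then moralize: connect
    # co-parents of each node and drop edge directions.
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = {v: set() for v in keep}
    for child in keep:
        ps = [p for p in parents.get(child, ()) if p in keep]
        for p in ps:
            adj[p].add(child)
            adj[child].add(p)
        for a, b in combinations(ps, 2):
            adj[a].add(b)
            adj[b].add(a)
    # X and Y are d-separated given Z iff removing Z disconnects them
    # in the moralized ancestral graph.
    reached = set(xs) - set(zs)
    frontier = deque(reached)
    while frontier:
        for nbr in adj[frontier.popleft()]:
            if nbr not in zs and nbr not in reached:
                reached.add(nbr)
                frontier.append(nbr)
    return reached.isdisjoint(ys)

# Step 1: a hypothetical variable-level DAG (node -> set of parents)
# for "smoking -> tar deposits -> cancer".
dag = {"tar": {"smoking"}, "cancer": {"tar"}}

# Step 2: graph-based reasoning instead of token-level reasoning.
print(d_separated(dag, {"smoking"}, {"cancer"}, set()))    # False: open chain
print(d_separated(dag, {"smoking"}, {"cancer"}, {"tar"}))  # True: mediator blocks

In this toy graph, conditioning on the mediator blocks the smoking-to-cancer path, so the second query returns True; a model that reasons over an explicit DAG rather than over surface text can justify its answer with precisely this kind of check.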

Published

2026-03-14

How to Cite

Li, Y., Shen, Y., Nian, Y., Gao, J., Wang, Z., Yu, C., … Zhao, Y. (2026). Mitigating Hallucinations in Large Language Models via Causal Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31852–31860. https://doi.org/10.1609/aaai.v40i38.40454

Issue

Vol. 40 No. 38 (2026)

Section

AAAI Technical Track on Natural Language Processing III