Cause and Effect: Can Large Language Models Truly Understand Causality?

Authors

  • Swagata Ashwani, Carnegie Mellon University
  • Kshiteesh Hegde, Rensselaer Polytechnic Institute
  • Nishith Reddy Mannuru, University of North Texas
  • Dushyant Singh Sengar, Independent Researcher
  • Mayank Jindal, Independent Researcher
  • Krishna Chaitanya Rao Kathala, University of Massachusetts
  • Dishant Banga, Bridgetree
  • Vinija Jain, Stanford University
  • Aman Chadha, Stanford University / Amazon GenAI

DOI:

https://doi.org/10.1609/aaaiss.v4i1.31764

Abstract

With the rise of Large Language Models (LLMs), it has become crucial to understand their capabilities and limitations in deciphering and explaining the complex web of causal relationships that language entails. Current methods use either explicit or implicit causal reasoning, yet there is a strong need for a unified approach combining both to tackle a wide array of causal relationships more effectively. This research proposes a novel architecture called Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA) to enhance causal reasoning and explainability. The proposed framework incorporates an explicit causal detection module built on ConceptNet and counterfactual statements, as well as implicit causal detection through LLMs. Our framework goes one step further with a layer of counterfactual explanations to accentuate LLMs' understanding of causality. The knowledge from ConceptNet enhances performance on multiple causal reasoning tasks, such as causal discovery, causal identification, and counterfactual reasoning. The counterfactual sentences add explicit knowledge of "not caused by" scenarios. By combining these powerful modules, our model aims to provide a deeper understanding of causal relationships, enabling enhanced interpretability. Evaluation on benchmark datasets shows improved performance across all metrics, such as accuracy, precision, recall, and F1 scores. We also present CausalNet, a novel dataset specifically curated to benchmark and enhance the causal reasoning capabilities of LLMs. This dataset is accompanied by code designed to facilitate further research in this domain.
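The hybrid design the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of the general idea, not the authors' implementation: the knowledge-graph edges, the stubbed model score, the fusion weights, and all function names are placeholder assumptions standing in for ConceptNet lookups and actual LLM prompting.

```python
# Illustrative sketch of a hybrid explicit/implicit causal-detection
# pipeline with a counterfactual probe. All data and weights below are
# toy stand-ins, not the CARE-CA system itself.

# Toy stand-in for a knowledge graph's causal edges (e.g., ConceptNet /r/Causes).
CAUSAL_EDGES = {
    ("rain", "wet ground"),
    ("smoking", "lung cancer"),
}

def explicit_score(cause: str, effect: str) -> float:
    """Explicit module: 1.0 if the pair appears in the knowledge graph."""
    return 1.0 if (cause, effect) in CAUSAL_EDGES else 0.0

def implicit_score(cause: str, effect: str) -> float:
    """Implicit module: placeholder for an LLM's causal-plausibility score."""
    # A real system would prompt an LLM here; we return a fixed stub value.
    return 0.5

def counterfactual_prompt(cause: str, effect: str) -> str:
    """Counterfactual statement used to probe 'not caused by' scenarios."""
    return f"If there were no {cause}, would {effect} still occur?"

def is_causal(cause: str, effect: str, threshold: float = 0.6) -> bool:
    """Fuse the two modules with illustrative weights and threshold."""
    score = 0.7 * explicit_score(cause, effect) + 0.3 * implicit_score(cause, effect)
    return score >= threshold
```

The fusion step shows why combining modules helps: a pair confirmed by the knowledge graph clears the threshold even if the implicit score is uncertain, while an unsupported pair does not, e.g. `is_causal("rain", "wet ground")` versus `is_causal("rain", "traffic jam")`.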

Published

2024-11-08

How to Cite

Ashwani, S., Hegde, K., Reddy Mannuru, N., Singh Sengar, D., Jindal, M., Chaitanya Rao Kathala, K., … Chadha, A. (2024). Cause and Effect: Can Large Language Models Truly Understand Causality?. Proceedings of the AAAI Symposium Series, 4(1), 2–9. https://doi.org/10.1609/aaaiss.v4i1.31764

Section

AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC)