Faithful Reasoning over Scientific Claims

Authors

  • Neşet Özkan Tan, The University of Auckland
  • Niket Tandon, Allen Institute for AI
  • David Wadden, Allen Institute for AI
  • Oyvind Tafjord, Allen Institute for AI
  • Mark Gahegan, The University of Auckland
  • Michael Witbrock, The University of Auckland

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31209

Keywords:

Trustworthy Claim Verification, Knowledge-based Explainable AI, Fact-checking

Abstract

Claim verification in scientific domains requires models that faithfully incorporate relevant knowledge from the vast and ever-growing literature. Unfaithful claim verification can lead to misinformation, such as that observed during the COVID-19 pandemic. Fact-checking systems often fail to capture the complex relationship between claims and evidence, especially for ambiguous claims and implicit assumptions. Relying solely on current LLMs poses challenges due to hallucinations and poor information traceability. To address these challenges, our approach considers multiple viewpoints on the scientific literature, enabling the assessment of contradictory arguments and implicit assumptions. Our proposed inference method adds faithful reasoning to large language models by distilling information from diverse, relevant scientific abstracts. The method produces a verdict label that can be weighted by the reputation of the contributing scientific articles, together with an explanation that can be traced back to its sources. Our findings show that human evaluators not only judge our explanations to be significantly superior to those of an off-the-shelf model, but also find that they faithfully enable tracing evidence back to its original sources.
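
As a rough illustration only, not the paper's implementation, the sketch below shows one way the aggregation described in the abstract could look: each relevant abstract contributes a stance and a distilled rationale, stances are combined using a reputation weight for the source, and the resulting verdict carries an explanation that points back to the contributing abstracts. The class and function names, the stance labels, and the weighting scheme are all assumptions made for this sketch.

```python
# Hypothetical sketch of reputation-weighted verdict aggregation with
# source-traceable explanations. Names and weighting scheme are assumptions,
# not the authors' implementation.
from dataclasses import dataclass


@dataclass
class AbstractEvidence:
    paper_id: str       # identifier of the source abstract
    stance: str         # "SUPPORTS", "REFUTES", or "NOT_ENOUGH_INFO"
    rationale: str      # distilled sentence(s) backing the stance
    reputation: float   # e.g. venue- or citation-based weight in [0, 1]


def aggregate_verdict(evidence: list[AbstractEvidence]) -> dict:
    """Combine per-abstract stances into a weighted verdict plus a
    traceable explanation listing the contributing sources."""
    scores = {"SUPPORTS": 0.0, "REFUTES": 0.0, "NOT_ENOUGH_INFO": 0.0}
    for ev in evidence:
        scores[ev.stance] += ev.reputation
    verdict = max(scores, key=scores.get)
    explanation = [
        {"paper_id": ev.paper_id, "rationale": ev.rationale}
        for ev in evidence
        if ev.stance == verdict
    ]
    return {"verdict": verdict, "scores": scores, "explanation": explanation}


# Example: supporting abstracts outweigh a refuting one only if their
# combined reputation weight is higher.
evidence = [
    AbstractEvidence("PMID:111", "SUPPORTS", "Trial shows effect X.", 0.9),
    AbstractEvidence("PMID:222", "SUPPORTS", "Replication confirms X.", 0.6),
    AbstractEvidence("PMID:333", "REFUTES", "No effect X in cohort Y.", 0.7),
]
print(aggregate_verdict(evidence)["verdict"])  # -> "SUPPORTS"
```

In this sketch the explanation is simply the list of rationales from the abstracts that agree with the winning verdict, which keeps every claim in the output attributable to a specific source document.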

Published

2024-05-20

Section

Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge