Argumentative Large Language Models for Explainable and Contestable Claim Verification

Authors

  • Gabriel Freedman, Imperial College London
  • Adam Dejl, Imperial College London
  • Deniz Gorur, Imperial College London
  • Xiang Yin, Imperial College London
  • Antonio Rago, Imperial College London
  • Francesca Toni, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v39i14.33637

Abstract

The profusion of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings make them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs which can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and formal reasoning means that any decision made by ArgLLMs may be explained and contested. We evaluate ArgLLMs’ performance experimentally in comparison with state-of-the-art techniques, in the context of the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.
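To give a concrete sense of the kind of argumentation framework and formal reasoning the abstract describes, below is a minimal sketch in Python. It assumes a quantitative bipolar argumentation framework (a claim attacked and supported by LLM-generated arguments, each with a confidence score) evaluated with a DF-QuAD-style gradual semantics; the framework shape, scores, names, and 0.5 decision threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: evaluating a quantitative bipolar argumentation framework (QBAF)
# with a DF-QuAD-style gradual semantics. Base scores and the example
# arguments are hypothetical stand-ins for LLM-generated content.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Argument:
    text: str
    base_score: float  # e.g. an LLM-estimated confidence in [0, 1]
    attackers: List["Argument"] = field(default_factory=list)
    supporters: List["Argument"] = field(default_factory=list)


def aggregate(strengths: List[float]) -> float:
    # Probabilistic-sum aggregation: 1 - prod(1 - s_i); empty list -> 0.
    v = 0.0
    for s in strengths:
        v = v + s - v * s
    return v


def strength(arg: Argument) -> float:
    # Recursively evaluate children, then adjust the base score.
    va = aggregate([strength(a) for a in arg.attackers])
    vs = aggregate([strength(s) for s in arg.supporters])
    if va >= vs:
        # Attackers dominate: pull the score towards 0.
        return arg.base_score - arg.base_score * (va - vs)
    # Supporters dominate: push the score towards 1.
    return arg.base_score + (1 - arg.base_score) * (vs - va)


# Hypothetical claim-verification example: the claim is the root argument;
# pro/con arguments attack or support it, as an LLM might generate them.
claim = Argument("The Great Wall of China is visible from space.", 0.5)
claim.attackers.append(
    Argument("Astronauts report it is not visible to the naked eye.", 0.9)
)
claim.supporters.append(
    Argument("It is thousands of kilometres long.", 0.4)
)

s = strength(claim)
print(f"claim strength = {s:.3f} -> {'accept' if s > 0.5 else 'reject'}")
```

Because the decision reduces to this explicit graph of arguments and scores, it can be explained (by tracing which arguments moved the strength) and contested (by adding, removing, or rescoring arguments and re-evaluating), which is the sense in which ArgLLMs' outputs are explainable and contestable.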

Published

2025-04-11

How to Cite

Freedman, G., Dejl, A., Gorur, D., Yin, X., Rago, A., & Toni, F. (2025). Argumentative Large Language Models for Explainable and Contestable Claim Verification. Proceedings of the AAAI Conference on Artificial Intelligence, 39(14), 14930-14939. https://doi.org/10.1609/aaai.v39i14.33637

Issue

Vol. 39 No. 14 (2025)

Section

AAAI Technical Track on Knowledge Representation and Reasoning