Strong Explanations in Abstract Argumentation

Authors

  • Markus Ulbricht Department of Computer Science, Leipzig University
  • Johannes P. Wallner Institute of Software Technology, Graz University of Technology

DOI:

https://doi.org/10.1609/aaai.v35i7.16805

Keywords:

Argumentation

Abstract

Abstract argumentation constitutes both a major research strand and a key approach that provides the core reasoning engine for a multitude of formalisms in computational argumentation in AI. Reasoning in abstract argumentation is carried out by viewing arguments and their relationships as abstract entities, with argumentation frameworks (AFs) being the most commonly used abstract formalism. Argumentation semantics then drive the reasoning by specifying formal criteria on which sets of arguments, called extensions, can be deemed jointly acceptable. Such extensions provide a basic way of explaining argumentative acceptance. Inspired by recent research, we propose and study a more general class of explanations, so-called strong explanations, for explaining argumentative acceptance in AFs. A strong explanation is a set of arguments such that a target set of arguments is acceptable in each subframework containing the explaining set. We formally show that strong explanations form a larger class than extensions, in particular opening the possibility of smaller explanations. Moreover, assuming basic properties, we show that any explanation strategy, broadly construed, is a strong explanation. We show that the increased variety of strong explanations comes with a computational trade-off: we provide an in-depth analysis of the associated complexity, showing a jump in the polynomial hierarchy compared to extensions.
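The core definition can be made concrete with a small brute-force sketch. The following Python code is illustrative only and reflects one plausible reading of the abstract: acceptance is taken as credulous acceptance under admissible semantics, and the subframeworks checked are those containing both the explaining set and the target (the paper itself covers a range of semantics and acceptance notions, so all function names and modeling choices here are assumptions, not the authors' formulation).

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of items, as tuples."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def conflict_free(s, attacks):
    """No member of s attacks another member of s."""
    return not any((a, b) in attacks for a in s for b in s)

def defends(s, a, args, attacks):
    """Every attacker of a (within args) is counter-attacked by some member of s."""
    return all(any((d, b) in attacks for d in s)
               for b in args if (b, a) in attacks)

def admissible(s, args, attacks):
    """s is conflict-free and defends all of its members."""
    return conflict_free(s, attacks) and all(defends(s, a, args, attacks) for a in s)

def credulously_accepted(target, args, attacks):
    """target is contained in some admissible set of the framework (args, attacks)."""
    return any(set(target) <= set(s) and admissible(set(s), args, attacks)
               for s in powerset(args))

def strong_explanation(expl, target, args, attacks):
    """expl strongly explains target: target is accepted in every subframework
    (induced by an argument subset) containing expl and target."""
    base = set(expl) | set(target)
    rest = set(args) - base
    for extra in powerset(rest):
        sub = base | set(extra)
        sub_atts = {(a, b) for (a, b) in attacks if a in sub and b in sub}
        if not credulously_accepted(target, sub, sub_atts):
            return False
    return True

# Toy AF: a attacks b, b attacks c. {a} strongly explains {c}, since in every
# subframework containing a and c the argument c is either unattacked or
# defended by a. The empty set does not: in the subframework {b, c}, the
# argument c is attacked and undefended.
args = {'a', 'b', 'c'}
attacks = {('a', 'b'), ('b', 'c')}
print(strong_explanation({'a'}, {'c'}, args, attacks))   # True
print(strong_explanation(set(), {'c'}, args, attacks))   # False
```

The toy example also hints at the abstract's point that strong explanations can be smaller than extensions: here the singleton {a} suffices, even though the relevant admissible set is {a, c}.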

Published

2021-05-18

How to Cite

Ulbricht, M., & Wallner, J. P. (2021). Strong Explanations in Abstract Argumentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6496-6504. https://doi.org/10.1609/aaai.v35i7.16805

Section

AAAI Technical Track on Knowledge Representation and Reasoning