Enforcement Heuristics for Argumentation with Deep Reinforcement Learning

Authors

  • Dennis Craandijk, Utrecht University; Netherlands Police
  • Floris Bex, Utrecht University; Tilburg University

DOI:

https://doi.org/10.1609/aaai.v36i5.20497

Keywords:

Knowledge Representation And Reasoning (KRR), Machine Learning (ML)

Abstract

In this paper, we present a learning-based approach to the symbolic reasoning problem of dynamic argumentation, where the knowledge about attacks between arguments is incomplete or evolving. Specifically, we employ deep reinforcement learning to learn which attack relations between arguments should be added or deleted in order to enforce the acceptability of (a set of) arguments. We show that our Graph Neural Network (GNN) architecture EGNN can learn a near-optimal enforcement heuristic for all common argument-fixed enforcement problems, including problems for which no other (symbolic) solvers exist. We demonstrate that EGNN outperforms other GNN baselines and, on enforcement problems with high computational complexity, is more efficient than state-of-the-art symbolic solvers. Thus, we show that our neuro-symbolic approach can learn heuristics without the expert knowledge of a human designer and offers a valid alternative to symbolic solvers. We publish our code at https://github.com/DennisCraandijk/DL-Abstract-Argumentation.
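To make the enforcement problem concrete, the following is a minimal illustrative sketch (not the paper's EGNN or its learned heuristic): an abstract argumentation framework represented as a directed attack graph, with a grounded-semantics check showing how deleting a single attack changes which arguments are accepted. All names here (`grounded_extension`, the example arguments `a`, `b`, `c`) are hypothetical and chosen for illustration only.

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension by iterating the characteristic
    function from the empty set: an argument is added when every one
    of its attackers is itself attacked by the current extension."""
    ext = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in ext)
                   for b in args if (b, a) in attacks)
        }
        if defended == ext:
            return ext
        ext = defended

args = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}       # c attacks b, b attacks a
print(grounded_extension(args, attacks))  # a is accepted: c defends it

# An enforcement edit: deleting the attack (c, b) leaves b unattacked,
# so a loses its defender and is no longer accepted.
attacks.discard(("c", "b"))
print(grounded_extension(args, attacks))  # now b is accepted, a is not
```

The learning task studied in the paper is, roughly, to choose such additions or deletions of attacks so that a target (set of) arguments becomes acceptable, ideally with as few changes as possible.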

Published

2022-06-28

How to Cite

Craandijk, D., & Bex, F. (2022). Enforcement Heuristics for Argumentation with Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5573-5581. https://doi.org/10.1609/aaai.v36i5.20497

Section

AAAI Technical Track on Knowledge Representation and Reasoning