Verifiable Machine Ethics in Changing Contexts

Authors

  • Louise A. Dennis, University of Manchester
  • Martin Mose Bentzen, Technical University of Denmark
  • Felix Lindner, Ulm University
  • Michael Fisher, University of Manchester

DOI:

https://doi.org/10.1609/aaai.v35i13.17366

Keywords:

Morality & Value-based AI

Abstract

Many systems proposed for the implementation of ethical reasoning encode user values as a set of rules or a model. We consider how changes of context affect these encodings. We propose the use of a reasoning cycle, in which information about the ethical reasoner's context is imported in a logical form, and we propose that context-specific aspects of an ethical encoding be prefaced by a guard formula. The guard formula should evaluate to true when the reasoner is in the appropriate context, at which point the relevant parts of the reasoner's rule set or model are updated accordingly. This architecture allows techniques for the model checking of agent-based autonomous systems to be used to verify that all contexts respect key stakeholder values. We implement this framework using the Hybrid Ethical Reasoning Agents system (HERA) and the Model Checking Agent Programming Languages (MCAPL) framework.
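The abstract describes guard formulas that gate context-specific parts of an ethical encoding, activating them only when the reasoner's current context satisfies the guard. The Python sketch below is purely illustrative and is not taken from the paper: the `GuardedRule` type, the rule names, and the example contexts are hypothetical, and the actual framework is built on HERA and MCAPL rather than standalone code like this.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A context is a snapshot of facts about the reasoner's situation,
# imported in logical form on each reasoning cycle (here: atom -> truth value).
Context = Dict[str, bool]


@dataclass
class GuardedRule:
    """A context-specific fragment of an ethical encoding, prefaced by a guard.

    The guard is a formula over the current context; the fragment only
    belongs to the active encoding when the guard evaluates to true.
    """
    guard: Callable[[Context], bool]
    rule: str  # placeholder for a rule or model fragment in a real encoding


def active_encoding(rules: List[GuardedRule], context: Context) -> List[str]:
    """Select the parts of the encoding whose guards hold in this context."""
    return [r.rule for r in rules if r.guard(context)]


# Hypothetical example: a care robot whose encoding switches between
# a 'home' context and a 'hospital' context.
rules = [
    GuardedRule(lambda c: c.get("at_home", False),
                "may_remind_patient_about_medication"),
    GuardedRule(lambda c: c.get("in_hospital", False),
                "must_defer_medication_decisions_to_staff"),
]

print(active_encoding(rules, {"at_home": True}))      # home-context rules
print(active_encoding(rules, {"in_hospital": True}))  # hospital-context rules
```

In the paper's setting, the point of structuring the encoding this way is that a model checker can enumerate the reachable contexts and verify, for each one, that the rules activated by the guards still respect key stakeholder values.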

Published

2021-05-18

How to Cite

Dennis, L. A., Bentzen, M. M., Lindner, F., & Fisher, M. (2021). Verifiable Machine Ethics in Changing Contexts. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11470-11478. https://doi.org/10.1609/aaai.v35i13.17366

Section

AAAI Technical Track on Philosophy and Ethics of AI