Mitigating Adversarial Norm Training with Moral Axioms

Authors

  • Taylor Olson, Northwestern University
  • Kenneth D. Forbus, Northwestern University

DOI:

https://doi.org/10.1609/aaai.v37i10.26402

Keywords:

PEAI: Morality and Value-Based AI, ML: Adversarial Learning & Robustness, PEAI: Safety, Robustness & Trustworthiness, KRR: Reasoning with Beliefs, PEAI: AI and Epistemology, KRR: Belief Change, CMS: Social Cognition and Interaction, RU: Uncertainty Representations

Abstract

This paper addresses the issue of adversarial attacks on ethical AI systems. We investigate using moral axioms and rules of deontic logic within a norm learning framework to mitigate adversarial norm training. This model of moral intuition and construction gives AI systems moral guard rails while still allowing them to learn conventions. We evaluate our approach with a questionnaire inspired by a study commonly used in moral development research, which tests an agent's ability to reason to correct moral conclusions despite opposing testimony. Our findings suggest that our model can still correctly evaluate moral situations and learn conventions in an adversarial training environment. We conclude that adding axiomatic moral prohibitions and deontic inference rules to a norm learning model makes it less vulnerable to adversarial attacks.
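
To make the core idea concrete, here is a minimal sketch (not the authors' implementation, which uses deontic logic within their own norm learning framework) of how axiomatic prohibitions can guard a testimony-driven norm learner. All names here (AXIOMATIC_PROHIBITIONS, learn_norm, Testimony, the example acts) are hypothetical illustrations, assuming norms are represented as deontic statuses over named acts:

```python
# Illustrative sketch only: norm learning guarded by axiomatic moral
# prohibitions and the deontic equivalence F(a) == O(not-a).
from dataclasses import dataclass

# Deontic statuses: Obligatory, Permissible, Forbidden.
OBLIGATORY, PERMISSIBLE, FORBIDDEN = "O", "P", "F"

# Axiomatic moral prohibitions act as guard rails: no amount of
# adversarial testimony can override them. (Hypothetical examples.)
AXIOMATIC_PROHIBITIONS = {"harm_innocent", "steal"}

@dataclass
class Testimony:
    act: str     # e.g. "wear_hat_indoors"
    status: str  # claimed deontic status: O, P, or F

learned_norms: dict[str, str] = {}  # conventions learned from testimony

def deontic_status(act: str) -> str:
    """Axioms take priority; learned conventions fill in the rest."""
    if act in AXIOMATIC_PROHIBITIONS:
        return FORBIDDEN  # F(a), i.e. it is obligatory not to do a
    return learned_norms.get(act, PERMISSIBLE)

def learn_norm(t: Testimony) -> bool:
    """Accept testimony only if it is consistent with the moral axioms."""
    if t.act in AXIOMATIC_PROHIBITIONS and t.status != FORBIDDEN:
        return False  # reject adversarial norm training
    learned_norms[t.act] = t.status  # conventions remain learnable
    return True

# Adversarial testimony is rejected; conventional testimony is learned.
assert not learn_norm(Testimony("harm_innocent", PERMISSIBLE))
assert learn_norm(Testimony("wear_hat_indoors", FORBIDDEN))
assert deontic_status("harm_innocent") == FORBIDDEN
```

The design choice this illustrates is the asymmetry the abstract describes: moral prohibitions are fixed axioms checked before any update, while conventions occupy the remaining, revisable portion of the norm store.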

Published

2023-06-26

How to Cite

Olson, T., & Forbus, K. D. (2023). Mitigating Adversarial Norm Training with Moral Axioms. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11882-11889. https://doi.org/10.1609/aaai.v37i10.26402

Section

AAAI Technical Track on Philosophy and Ethics of AI