How RL Agents Behave When Their Actions Are Modified

Authors

  • Eric D. Langlois (University of Toronto, Vector Institute, DeepMind)
  • Tom Everitt (DeepMind)

DOI:

https://doi.org/10.1609/aaai.v35i13.17378

Keywords:

Safety, Robustness & Trustworthiness, Human-in-the-loop Machine Learning

Abstract

Reinforcement learning in complex environments may require supervision to prevent the agent from attempting dangerous actions. As a result of supervisor intervention, the executed action may differ from the action specified by the policy. How does this affect learning? We present the Modified-Action Markov Decision Process, an extension of the MDP model that allows actions to differ from the policy. We analyze the asymptotic behaviours of common reinforcement learning algorithms in this setting and show that they adapt in different ways: some completely ignore modifications while others go to various lengths in trying to avoid action modifications that decrease reward. By choosing the right algorithm, developers can prevent their agents from learning to circumvent interruptions or constraints, and better control agent responses to other kinds of action modification, like self-damage.
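The core idea of the Modified-Action MDP, that the executed action may differ from the action the policy specifies, can be illustrated with a minimal sketch. All names here (`GridMDP`, `SupervisedMDP`, the corridor dynamics) are hypothetical illustrations, not from the paper:

```python
class GridMDP:
    """Toy 1-D corridor MDP: states 0..4, actions -1 (left) / +1 (right).
    State 3 yields reward; state 4 is a hazardous cliff."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 3 else 0.0
        done = self.state == 4  # stepping onto the cliff ends the episode
        return self.state, reward, done


class SupervisedMDP:
    """MAMDP-style wrapper: a supervisor may replace the policy's chosen
    action before execution, so executed != chosen in general."""
    def __init__(self, env):
        self.env = env

    def step(self, chosen_action):
        executed_action = self._supervise(chosen_action)
        state, reward, done = self.env.step(executed_action)
        return state, reward, done, executed_action

    def _supervise(self, action):
        # Intervene on moves that would step onto the cliff (state 4).
        if self.env.state + action >= 4:
            return -1  # override: force a step left instead
        return action
```

A policy that always moves right still collects reward at state 3, but its rightward action from state 3 is overridden, so the episode never terminates at the cliff. Whether a learning algorithm "notices" such overrides, and whether it adapts to exploit or avoid them, is exactly the distinction the paper analyzes.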

Published

2021-05-18

How to Cite

Langlois, E. D., & Everitt, T. (2021). How RL Agents Behave When Their Actions Are Modified. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11586-11594. https://doi.org/10.1609/aaai.v35i13.17378

Section

AAAI Technical Track on Philosophy and Ethics of AI