How RL Agents Behave When Their Actions Are Modified


  • Eric D. Langlois, University of Toronto; Vector Institute; DeepMind
  • Tom Everitt, DeepMind


Safety, Robustness & Trustworthiness, Human-in-the-loop Machine Learning


Reinforcement learning in complex environments may require supervision to prevent the agent from attempting dangerous actions. As a result of supervisor intervention, the executed action may differ from the action specified by the policy. How does this affect learning? We present the Modified-Action Markov Decision Process, an extension of the MDP model that allows actions to differ from the policy. We analyze the asymptotic behaviours of common reinforcement learning algorithms in this setting and show that they adapt in different ways: some completely ignore modifications while others go to various lengths in trying to avoid action modifications that decrease reward. By choosing the right algorithm, developers can prevent their agents from learning to circumvent interruptions or constraints, and better control agent responses to other kinds of action modification, like self-damage.
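To make the setting concrete, here is a minimal illustrative sketch (not from the paper; all names and the toy environment are hypothetical) of a single Modified-Action MDP rollout: the policy proposes an action, a supervisor's modification function may replace it, and the environment executes the possibly-modified action.

```python
def policy(state):
    """Toy policy: always propose moving right."""
    return "right"

def supervisor(state, action):
    """Hypothetical intervention: block 'right' at the boundary state,
    so the executed action can differ from the policy's action."""
    if state >= 4 and action == "right":
        return "stay"
    return action

def env_step(state, action):
    """Deterministic 1-D chain over states 0..4; reward 1 for moving right."""
    if action == "right":
        return min(state + 1, 4), 1.0
    return state, 0.0

def rollout(start=0, steps=6):
    state, total = start, 0.0
    for _ in range(steps):
        proposed = policy(state)
        executed = supervisor(state, proposed)  # may differ from proposed
        state, reward = env_step(state, executed)
        total += reward
    return state, total
```

How a learning algorithm treats the gap between `proposed` and `executed` is exactly what the paper analyzes: some algorithms ignore the modification, while others learn to avoid states where interventions occur.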




How to Cite

Langlois, E. D., & Everitt, T. (2021). How RL Agents Behave When Their Actions Are Modified. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11586-11594.



AAAI Technical Track on Philosophy and Ethics of AI