Safe Reinforcement Learning via Shielding under Partial Observability

Authors

  • Steven Carr, University of Texas at Austin
  • Nils Jansen, Radboud University Nijmegen
  • Sebastian Junges, Radboud University Nijmegen
  • Ufuk Topcu, University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v37i12.26723

Keywords:

General

Abstract

Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent agents from making disastrous decisions while exploring their environment. A family of approaches to this problem assumes domain knowledge in the form of a (partial) model of the environment to decide on the safety of an action. A so-called shield forces the RL agent to select only safe actions. However, for adoption in various applications, one must look beyond enforcing safety and also ensure that RL achieves good performance. We extend the applicability of shields via tight integration with state-of-the-art deep RL, and provide an extensive empirical study in challenging, sparse-reward environments under partial observability. We show that a carefully integrated shield ensures safety and can improve the convergence rate and final performance of RL agents. We furthermore show that a shield can be used to bootstrap state-of-the-art RL agents: they remain safe after initial learning in a shielded setting, allowing us to eventually disable a potentially overly conservative shield.
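
As a rough illustration of the shielding idea described in the abstract (not the authors' implementation), the following Python sketch shows how a shield can restrict an RL agent's choice to actions that a hypothetical safety check, backed by a (partial) model of the environment, deems safe. The names `is_safe`, `shield`, and `agent_policy` are placeholders introduced for illustration only.

```python
import random

def shield(belief, actions, is_safe):
    """Return the subset of actions judged safe in the current belief.

    `is_safe(belief, action)` is a hypothetical callback backed by a
    (partial) model of the environment, e.g. a safety analysis of a POMDP.
    """
    allowed = [a for a in actions if is_safe(belief, a)]
    # Fall back to all actions if the shield would block everything,
    # so the agent is never left without a choice.
    return allowed if allowed else actions

def shielded_step(agent_policy, belief, actions, is_safe):
    """Let the agent's policy pick only among shield-approved actions."""
    allowed = shield(belief, actions, is_safe)
    return agent_policy(belief, allowed)

# Example usage with stand-in components.
if __name__ == "__main__":
    actions = ["left", "right", "forward"]
    belief = {"near_obstacle": True}

    def is_safe(b, a):
        # Toy safety rule: do not move forward when an obstacle is believed near.
        return not (b["near_obstacle"] and a == "forward")

    def agent_policy(b, allowed):
        return random.choice(allowed)  # placeholder for a deep RL policy

    print(shielded_step(agent_policy, belief, actions, is_safe))
```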


Published

2023-06-26

How to Cite

Carr, S., Jansen, N., Junges, S., & Topcu, U. (2023). Safe Reinforcement Learning via Shielding under Partial Observability. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14748-14756. https://doi.org/10.1609/aaai.v37i12.26723

Issue

Vol. 37 No. 12 (2023)

Section

AAAI Special Track on Safe and Robust AI