Probabilistic Shielding for Safe Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v39i15.33767
Abstract
In real-life scenarios, a Reinforcement Learning (RL) agent aiming to maximize its reward must often also behave in a safe manner, including at training time. Much attention in recent years has therefore been given to Safe RL, where an agent aims to learn an optimal policy among all policies that satisfy a given safety constraint. However, strict safety guarantees are often provided through approaches based on linear programming, which scale poorly. In this paper we present a new, scalable method that enjoys strict formal guarantees for Safe RL, in the case where the safety dynamics of the Markov Decision Process (MDP) are known and safety is defined as an undiscounted probabilistic avoidance property. Our approach is based on state augmentation of the MDP and on the design of a shield that restricts the actions available to the agent. We show that our approach provides a strict formal guarantee that the agent stays safe at training and test time. Furthermore, we demonstrate through experimental evaluation that our approach is viable in practice.
Published
2025-04-11
How to Cite
Court, E. H.-D. le, Belardinelli, F., & Goodall, A. W. (2025). Probabilistic Shielding for Safe Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 16091-16099. https://doi.org/10.1609/aaai.v39i15.33767
Issue
Section
AAAI Technical Track on Machine Learning I
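The abstract describes a shield that restricts the agent's actions using known safety dynamics and an undiscounted probabilistic avoidance objective. The sketch below is a minimal, hedged illustration of that general idea on a hypothetical toy MDP — not the authors' implementation: it computes, by value iteration, the maximal probability of forever avoiding an unsafe state, and then masks out actions whose expected avoidance probability falls below a chosen threshold. All names, dynamics, and the threshold are illustrative assumptions.

```python
# Hypothetical toy safety MDP: states 0..3, state 3 is unsafe (to be avoided).
# P[s][a] is a list of (next_state, prob) pairs; the safety dynamics are
# assumed known, as in the paper's setting. Everything here is illustrative.
N_STATES = 4
UNSAFE = {3}
ACTIONS = [0, 1]
P = {
    0: {0: [(0, 0.9), (3, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(2, 1.0)], 1: [(1, 0.5), (3, 0.5)]},
    2: {0: [(2, 1.0)], 1: [(0, 1.0)]},
    3: {0: [(3, 1.0)], 1: [(3, 1.0)]},
}

def avoidance_values(eps=1e-10):
    """Value iteration for the maximal probability of never reaching UNSAFE.

    The undiscounted avoidance probability is a greatest fixed point, so we
    iterate downward from v = 1 on safe states (v = 0 on unsafe states).
    """
    v = [0.0 if s in UNSAFE else 1.0 for s in range(N_STATES)]
    while True:
        new = [0.0 if s in UNSAFE else
               max(sum(p * v[t] for t, p in P[s][a]) for a in ACTIONS)
               for s in range(N_STATES)]
        if max(abs(a - b) for a, b in zip(new, v)) < eps:
            return new
        v = new

def shield(s, v, threshold):
    """Allow only actions whose expected avoidance probability meets the threshold."""
    return [a for a in ACTIONS
            if sum(p * v[t] for t, p in P[s][a]) >= threshold]

v = avoidance_values()
# In state 0, action 0 risks the unsafe state (avoidance prob 0.9), while
# action 1 leads to a fully safe region (avoidance prob 1.0).
print(shield(0, v, threshold=0.95))  # [1]
print(shield(0, v, threshold=0.8))   # [0, 1]
```

A learning agent would then pick actions only from the shielded set at every step, which is what makes safety hold during training as well as at test time; the paper's actual construction additionally augments the MDP state, which this sketch omits.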