Exploring Safer Behaviors for Deep Reinforcement Learning

Authors

  • Enrico Marchesini, University of Verona
  • Davide Corsi, University of Verona
  • Alessandro Farinelli, University of Verona

DOI:

https://doi.org/10.1609/aaai.v36i7.20737

Keywords:

Machine Learning (ML), Search And Optimization (SO)

Abstract

We consider Reinforcement Learning (RL) problems where an agent attempts to maximize a reward signal while minimizing a cost function that models unsafe behaviors. Such a formalization is typically addressed in the literature with constrained optimization on the cost, which limits exploration and leads to a significant trade-off between cost and reward. In contrast, we propose a Safety-Oriented Search that complements Deep RL algorithms to bias the policy toward safety within an evolutionary cost optimization. We leverage the exploration benefits of evolutionary methods to design a novel concept of safe mutations that use visited unsafe states to explore safer actions. We further characterize the behaviors of the policies over desired specifications with a sample-based bound estimation, which makes prior verification analysis tractable within the training loop, hence driving the learning process towards safer regions of the policy space. Empirical evidence on the Safety Gym benchmark shows that our approach avoids penalizing the return while improving the safety of the policy.
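The abstract is the only technical description on this page, so as a rough illustration, the following is a minimal Python sketch of what a safe-mutation step with sample-based cost estimation could look like. All names (safe_mutation, act, cost_fn, cost_bound) and the acceptance rule are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Illustrative sketch only: NOT the paper's implementation.
    # Idea: perturb the policy parameters (an evolutionary mutation),
    # then accept the mutant only if its estimated cost on previously
    # visited unsafe states stays under a bound, biasing the search
    # toward safer actions.

    def safe_mutation(policy_params, unsafe_states, act, cost_fn,
                      sigma=0.05, n_samples=100, cost_bound=0.1, seed=None):
        """Return mutated parameters, accepted only if estimated cost is low.

        policy_params : flat np.ndarray of policy weights (assumed).
        unsafe_states : list of states flagged as unsafe during training.
        act(params, s): hypothetical helper returning the policy's action.
        cost_fn(s, a) : hypothetical per-step cost modeling unsafe behavior.
        """
        rng = np.random.default_rng(seed)

        # Gaussian perturbation of the parameter vector (the mutation).
        mutant = policy_params + sigma * rng.standard_normal(policy_params.shape)

        # Sample-based bound estimation: average cost of the mutant's
        # actions over a batch of previously visited unsafe states.
        batch = rng.choice(len(unsafe_states),
                           size=min(n_samples, len(unsafe_states)),
                           replace=False)
        est_cost = float(np.mean([cost_fn(unsafe_states[i],
                                          act(mutant, unsafe_states[i]))
                                  for i in batch]))

        # Accept the safer mutant; otherwise keep the original parameters.
        return mutant if est_cost <= cost_bound else policy_params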

Published

2022-06-28

How to Cite

Marchesini, E., Corsi, D., & Farinelli, A. (2022). Exploring Safer Behaviors for Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7701-7709. https://doi.org/10.1609/aaai.v36i7.20737

Section

AAAI Technical Track on Machine Learning II