Evaluating Model-Free Reinforcement Learning toward Safety-Critical Tasks

Authors

  • Linrui Zhang, Tsinghua University
  • Qin Zhang, Tsinghua University
  • Li Shen, JD Explore Academy
  • Bo Yuan, Qianyuan Institute of Sciences
  • Xueqian Wang, Tsinghua University
  • Dacheng Tao, JD Explore Academy

DOI:

https://doi.org/10.1609/aaai.v37i12.26786

Keywords:

General

Abstract

Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there is still a lack of high-quality evaluation of algorithms that adhere to safety constraints at each decision step under complex and unknown dynamics. In this paper, we revisit prior work in this scope from the perspective of state-wise safe RL and categorize it into projection-based, recovery-based, and optimization-based approaches. Furthermore, we propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection. This novel technique explicitly enforces hard constraints via a deep unrolling architecture and enjoys structural advantages in navigating the trade-off between reward improvement and constraint satisfaction. To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit, a toolkit that provides off-the-shelf interfaces and evaluation utilities for safety-critical tasks. We then perform a comparative study of the involved algorithms on six benchmarks ranging from robotic control to autonomous driving. The empirical results provide insight into their applicability and robustness in learning zero-cost-return policies without task-dependent handcrafting. The project page is available at https://sites.google.com/view/saferlkit.
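To illustrate the projection-based family the abstract refers to, the sketch below shows a minimal state-wise safety projection: a candidate action is corrected in closed form so that a linearized one-step cost model predicts a cost at or below the threshold. This is a generic illustration under stated assumptions (a known cost estimate and its action-gradient), not the actual SafeRL-Kit API or the USL architecture; the function name and signature are hypothetical.

```python
import numpy as np

def project_action(action, cost, grad_c, threshold):
    """Minimal-norm correction of `action` so the linearized cost
    c(s, a') ~ cost + grad_c . (a' - action) satisfies c <= threshold.

    Solves: min ||a' - action||^2  s.t.  cost + grad_c . (a' - action) <= threshold,
    whose closed-form solution moves along -grad_c only when the
    constraint is predicted to be violated.
    """
    slack = threshold - cost
    if slack >= 0.0:
        # Predicted cost already within budget: keep the action unchanged.
        return action
    # Active constraint: shift by (slack / ||grad_c||^2) * grad_c so the
    # linearized cost lands exactly on the threshold.
    return action + (slack / (grad_c @ grad_c + 1e-8)) * grad_c
```

For example, with a predicted cost of 2.0, threshold 1.0, and cost gradient (1, 0), the action (0, 0) is shifted to (-1, 0), bringing the linearized cost down to exactly 1.0; an already-safe action passes through untouched. Recovery-based and optimization-based methods differ in where this correction happens (a backup policy vs. the training objective), but the projection step above captures the per-step hard-constraint enforcement the paper evaluates.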

Published

2023-06-26

How to Cite

Zhang, L., Zhang, Q., Shen, L., Yuan, B., Wang, X., & Tao, D. (2023). Evaluating Model-Free Reinforcement Learning toward Safety-Critical Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15313-15321. https://doi.org/10.1609/aaai.v37i12.26786

Section

AAAI Special Track on Safe and Robust AI