WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning

Authors

  • Qisong Yang Delft University of Technology
  • Thiago D. Simão Delft University of Technology
  • Simon H. Tindemans Delft University of Technology
  • Matthijs T. J. Spaan Delft University of Technology

DOI:

https://doi.org/10.1609/aaai.v35i12.17272

Keywords:

Reinforcement Learning

Abstract

Safe exploration is regarded as a key priority area for reinforcement learning research. With separate reward and safety signals, it is natural to cast safe exploration as constrained reinforcement learning, where the expected long-term cost of a policy is constrained. However, it can be hazardous to constrain only the expected safety signal without considering the tail of its distribution; in safety-critical domains, worst-case analysis is required to avoid disastrous results. We present a novel reinforcement learning algorithm called Worst-Case Soft Actor Critic (WCSAC), which extends the Soft Actor Critic algorithm with a safety critic to achieve risk control. More specifically, the conditional Value-at-Risk of the cost distribution, at a chosen risk level, serves as the safety measure for judging constraint satisfaction and guides the adaptation of safety weights that trade off reward and safety. As a result, we can optimize policies under the premise that their worst-case performance satisfies the constraints. The empirical analysis shows that our algorithm attains better risk control compared to expectation-based methods.
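For intuition, the sketch below illustrates the kind of update the abstract describes: a pessimistic (CVaR) estimate of the cost return, rather than its mean, is compared against the cost budget and drives a Lagrangian-style safety weight. The Gaussian approximation of the cost return and the function names (gaussian_cvar, update_safety_weight) are assumptions made here for illustration, not the paper's exact implementation.

```python
# Illustrative sketch only: a CVaR estimate of the cost return steering an
# adaptive safety weight. The Gaussian approximation and all names below are
# assumptions for exposition, not taken from the paper.
from scipy.stats import norm


def gaussian_cvar(mean: float, std: float, alpha: float) -> float:
    """CVaR over the worst alpha-fraction of a Gaussian cost return.

    alpha = 1.0 recovers the plain expectation (risk-neutral);
    smaller alpha focuses on a more pessimistic tail.
    """
    if alpha >= 1.0:
        return mean
    tail_pdf = norm.pdf(norm.ppf(1.0 - alpha))
    return mean + std * tail_pdf / alpha


def update_safety_weight(weight: float, cost_mean: float, cost_std: float,
                         budget: float, alpha: float, lr: float = 1e-3) -> float:
    """Raise the safety weight when the CVaR cost estimate exceeds the budget,
    lower it otherwise (a Lagrangian-style dual ascent step)."""
    violation = gaussian_cvar(cost_mean, cost_std, alpha) - budget
    return max(0.0, weight + lr * violation)


# Example: at risk level alpha = 0.1 the tail estimate exceeds the budget of 25,
# so the safety weight grows and the actor is pushed toward safer actions.
w = update_safety_weight(weight=1.0, cost_mean=20.0, cost_std=10.0,
                         budget=25.0, alpha=0.1)
```

Note that with alpha = 1 the CVaR collapses to the mean, so this kind of update interpolates between an expectation-based constraint and increasingly worst-case behavior as alpha shrinks.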

Published

2021-05-18

How to Cite

Yang, Q., Simão, T. D., Tindemans, S. H., & Spaan, M. T. J. (2021). WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10639-10646. https://doi.org/10.1609/aaai.v35i12.17272

Issue

Vol. 35 No. 12 (2021)
Section

AAAI Technical Track on Machine Learning V