Learning with Safety Constraints: Sample Complexity of Reinforcement Learning for Constrained MDPs

Authors

  • Aria HasanzadeZonuzy, Texas A&M University
  • Archana Bura, Texas A&M University
  • Dileep Kalathil, Texas A&M University
  • Srinivas Shakkottai, Texas A&M University

DOI:

https://doi.org/10.1609/aaai.v35i9.16937

Keywords:

Reinforcement Learning

Abstract

Many physical systems have underlying safety considerations that require the policy employed to ensure satisfaction of a set of constraints. The analytical formulation usually takes the form of a Constrained Markov Decision Process (CMDP). We focus on the case where the CMDP is unknown, and RL algorithms obtain samples to discover the model and compute an optimal constrained policy. Our goal is to characterize the relationship between safety constraints and the number of samples needed to ensure a desired level of accuracy (both objective maximization and constraint satisfaction) in a PAC sense. We explore two classes of RL algorithms, namely, (i) a generative-model-based approach, wherein samples are taken initially to estimate a model, and (ii) an online approach, wherein the model is updated as samples are obtained. Our main finding is that, compared to the best-known bounds for the unconstrained regime, the sample complexity of constrained RL algorithms is increased by a factor that is logarithmic in the number of constraints, which suggests that the approach may be easily utilized in real systems.
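To make the generative-model-based approach concrete, the following is a minimal sketch rather than the authors' algorithm: it assumes a tabular, discounted CMDP with access to a generative sampler `sampler(s, a)` that returns a next state, estimates the transition model from a fixed number of samples per state-action pair, and then solves the estimated model with the standard occupancy-measure linear program. All function names, the sampler interface, and the discount-factor default are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def estimate_model(sampler, n_states, n_actions, n_samples):
    """Estimate transition probabilities by drawing n_samples next states
    from a generative model for every (state, action) pair."""
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                s_next = sampler(s, a)  # generative-model call (assumed interface)
                P_hat[s, a, s_next] += 1.0
            P_hat[s, a] /= n_samples
    return P_hat

def solve_cmdp_lp(P, r, c, budget, mu0, gamma=0.95):
    """Solve the constrained planning problem on the (estimated) model via the
    occupancy-measure LP, then recover a stochastic policy.
    P: (S, A, S) transitions, r: (S, A) rewards, c: (K, S, A) costs,
    budget: length-K cost limits, mu0: (S,) initial distribution."""
    S, A, _ = P.shape
    n = S * A
    # Maximize expected discounted reward  <=>  minimize -r . x
    obj = -r.reshape(n)
    # Flow-conservation equalities:
    # sum_a x(s,a) - gamma * sum_{s',a'} P(s | s',a') x(s',a') = mu0(s)
    A_eq = np.zeros((S, n))
    for s in range(S):
        for sp in range(S):
            for a in range(A):
                A_eq[s, sp * A + a] -= gamma * P[sp, a, s]
        for a in range(A):
            A_eq[s, s * A + a] += 1.0
    # Safety constraints: expected discounted cost stays within each budget
    A_ub = c.reshape(len(budget), n)
    res = linprog(obj, A_ub=A_ub, b_ub=np.asarray(budget),
                  A_eq=A_eq, b_eq=mu0, bounds=(0, None))
    if not res.success:
        raise ValueError("Constrained LP infeasible on the estimated model")
    x = res.x.reshape(S, A)
    # Policy: pi(a|s) proportional to the occupancy measure x(s, a)
    return x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
```

The online approach described in the abstract differs in that the empirical model (and hence the policy) is refreshed as samples arrive, rather than after a one-shot sampling phase.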

Published

2021-05-18

How to Cite

HasanzadeZonuzy, A., Bura, A., Kalathil, D., & Shakkottai, S. (2021). Learning with Safety Constraints: Sample Complexity of Reinforcement Learning for Constrained MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7667-7674. https://doi.org/10.1609/aaai.v35i9.16937

Section

AAAI Technical Track on Machine Learning II