A Sample-Efficient Algorithm for Episodic Finite-Horizon MDP with Constraints

Authors

  • Krishna C. Kalagarla, University of Southern California
  • Rahul Jain, University of Southern California
  • Pierluigi Nuzzo, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v35i9.16979

Keywords:

Reinforcement Learning, Online Learning & Bandits

Abstract

Constrained Markov decision processes (CMDPs) formalize sequential decision-making problems whose objective is to minimize a cost function while satisfying constraints on various other cost functions. In this paper, we consider the setting of episodic fixed-horizon CMDPs. We propose an online algorithm which leverages the linear programming formulation of the finite-horizon CMDP for repeated optimistic planning, and we provide a probably approximately correct (PAC) guarantee on the number of episodes needed to ensure a near-optimal policy, i.e., a policy whose objective value is close to the optimal value and which satisfies the constraints within a small tolerance, with high probability. The number of episodes needed is shown to have linear dependence on the sizes of the state and action spaces and quadratic dependence on the time horizon and on an upper bound on the number of possible successor states for a state-action pair. Therefore, if the upper bound on the number of possible successor states is much smaller than the size of the state space, the number of needed episodes becomes linear in the sizes of the state and action spaces and quadratic in the time horizon.
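
The abstract refers to the linear programming formulation of a finite-horizon CMDP, in which one optimizes over occupancy measures rather than policies. The sketch below illustrates that formulation on a small synthetic instance, solved with scipy.optimize.linprog. It is not the authors' online algorithm (which performs optimistic planning with estimated models); all instance parameters (P, c, d, tau, mu) are placeholder assumptions chosen only for illustration.

    # Minimal sketch of the occupancy-measure LP for a finite-horizon CMDP.
    # Not the paper's algorithm; a toy instance with one cost constraint.
    import numpy as np
    from scipy.optimize import linprog

    S, A, H = 3, 2, 4                            # states, actions, horizon (toy sizes)
    rng = np.random.default_rng(0)

    P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition kernel (assumed)
    c = rng.uniform(0, 1, size=(S, A))           # objective cost c(s, a) (assumed)
    d = rng.uniform(0, 1, size=(S, A))           # constraint cost d(s, a) (assumed)
    tau = 0.6 * H                                # constraint budget (assumed)
    mu = np.full(S, 1.0 / S)                     # initial state distribution (assumed)

    # Decision variable: occupancy measure q[h, s, a], flattened to a vector.
    def idx(h, s, a):
        return (h * S + s) * A + a

    n = H * S * A
    c_obj = np.array([c[s, a] for h in range(H) for s in range(S) for a in range(A)])

    # Flow constraints: sum_a q[0,s,a] = mu[s], and for h >= 1,
    # sum_a q[h,s,a] = sum_{s',a'} P[s',a',s] * q[h-1,s',a'].
    A_eq, b_eq = [], []
    for h in range(H):
        for s in range(S):
            row = np.zeros(n)
            for a in range(A):
                row[idx(h, s, a)] = 1.0
            if h == 0:
                b_eq.append(mu[s])
            else:
                for sp in range(S):
                    for ap in range(A):
                        row[idx(h - 1, sp, ap)] -= P[sp, ap, s]
                b_eq.append(0.0)
            A_eq.append(row)

    # Single cost constraint: sum_{h,s,a} d(s,a) * q[h,s,a] <= tau.
    A_ub = [np.array([d[s, a] for h in range(H) for s in range(S) for a in range(A)])]
    b_ub = [tau]

    res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n, method="highs")
    assert res.success, "LP infeasible for this toy instance"

    # Recover a nonstationary policy: pi_h(a|s) proportional to q[h,s,a].
    q = res.x.reshape(H, S, A)
    pi = q / np.clip(q.sum(axis=2, keepdims=True), 1e-12, None)
    print("optimal constrained objective:", res.fun)

Note that the recovered policy is nonstationary (it may depend on the step h), which is the natural policy class for fixed-horizon problems; the paper's algorithm repeatedly solves an optimistic variant of such an LP as transition estimates improve across episodes.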

Published

2021-05-18

How to Cite

Kalagarla, K. C., Jain, R., & Nuzzo, P. (2021). A Sample-Efficient Algorithm for Episodic Finite-Horizon MDP with Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8030-8037. https://doi.org/10.1609/aaai.v35i9.16979

Section

AAAI Technical Track on Machine Learning II