TY - JOUR
AU - Mandal, Debmalya
AU - Radanovic, Goran
AU - Gan, Jiarui
AU - Singla, Adish
AU - Majumdar, Rupak
PY - 2023/06/26
Y2 - 2024/09/14
TI - Online Reinforcement Learning with Uncertain Episode Lengths
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 37
IS - 7
SE - AAAI Technical Track on Machine Learning II
DO - 10.1609/aaai.v37i7.26088
UR - https://ojs.aaai.org/index.php/AAAI/article/view/26088
SP - 9064-9071
AB - Existing episodic reinforcement learning algorithms assume that the length of an episode is fixed across time and known a priori. In this paper, we consider a general framework of episodic reinforcement learning when the length of each episode is drawn from a distribution. We first establish that this problem is equivalent to online reinforcement learning with general discounting, where the learner is trying to optimize the expected discounted sum of rewards over an infinite horizon, but where the discounting function is not necessarily geometric. We show that minimizing regret with this new general discounting is equivalent to minimizing regret with uncertain episode lengths. We then design a reinforcement learning algorithm that minimizes regret with general discounting but acts for the setting with uncertain episode lengths. We instantiate our general bound for different types of discounting, including geometric and polynomial discounting. We also show that we can obtain similar regret bounds even when the distribution over the episode lengths is unknown, by estimating it over time. Finally, we compare our learning algorithms with existing value-iteration-based episodic RL algorithms on a grid-world environment.
ER -