Non-stationary Risk-Sensitive Reinforcement Learning: Near-Optimal Dynamic Regret, Adaptive Detection, and Separation Design

Authors

  • Yuhao Ding, University of California, Berkeley
  • Ming Jin, Virginia Tech
  • Javad Lavaei, University of California, Berkeley

DOI:

https://doi.org/10.1609/aaai.v37i6.25901

Keywords:

ML: Reinforcement Learning Theory, ML: Reinforcement Learning Algorithms

Abstract

We study risk-sensitive reinforcement learning (RL) based on an entropic risk measure in episodic non-stationary Markov decision processes (MDPs). Both the reward functions and the state transition kernels are unknown and may vary arbitrarily over time, subject to a budget on their cumulative variation. When this variation budget is known a priori, we propose two restart-based algorithms, namely Restart-RSMB and Restart-RSQ, and establish their dynamic regret bounds. Building on these results, we further present a meta-algorithm that requires no prior knowledge of the variation budget and can adaptively detect non-stationarity in the exponential value functions. A dynamic regret lower bound is then established for non-stationary risk-sensitive RL to certify the near-optimality of the proposed algorithms. Our results also show that risk control and the handling of non-stationarity can be designed separately within the algorithm when the variation budget is known a priori, whereas the non-stationarity detection mechanism in the adaptive algorithm depends on the risk parameter. This work offers the first non-asymptotic theoretical analysis of non-stationary risk-sensitive RL in the literature.
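
For concreteness, the entropic risk measure referenced above is typically defined as follows; the notation is the standard convention from the risk-sensitive RL literature and is an illustrative sketch rather than a verbatim definition from the paper. For a risk parameter \beta \neq 0, a policy \pi over an episode of horizon H is evaluated by

    V^{\pi}_{\beta} = \frac{1}{\beta} \log \mathbb{E}^{\pi}\!\left[ \exp\!\left( \beta \sum_{h=1}^{H} r_h(s_h, a_h) \right) \right],

so that \beta > 0 corresponds to risk-seeking behavior, \beta < 0 to risk-averse behavior, and the limit \beta \to 0 recovers the risk-neutral expected return. Dynamic regret then compares the learner against the optimal policy of each episode's (possibly different) MDP under this objective.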

Published

2023-06-26

How to Cite

Ding, Y., Jin, M., & Lavaei, J. (2023). Non-stationary Risk-Sensitive Reinforcement Learning: Near-Optimal Dynamic Regret, Adaptive Detection, and Separation Design. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7405-7413. https://doi.org/10.1609/aaai.v37i6.25901

Section

AAAI Technical Track on Machine Learning I