Mixing-Time Regularized Policy Gradient

Authors

  • Tetsuro Morimura IBM Research - Tokyo
  • Takayuki Osogami IBM Research - Tokyo
  • Tomoyuki Shirai Kyushu University

DOI:

https://doi.org/10.1609/aaai.v28i1.9013

Keywords:

reinforcement learning, Markov decision process, mixing time

Abstract

Policy gradient reinforcement learning (PGRL) has been receiving substantial attention as a means of seeking stochastic policies that maximize cumulative reward. However, the learning speed of PGRL is known to decrease substantially when PGRL explores policies whose induced Markov chains have long mixing times. We study a new approach that regularizes how PGRL explores the policy space by using the hitting time of the Markov chains. The hitting time gives an upper bound on the mixing time, and the proposed approach improves learning efficiency by keeping the mixing time of the Markov chains short. In particular, we propose a temporal-difference learning method for estimating the gradient of the hitting time. Numerical experiments show that the proposed method outperforms conventional PGRL methods.
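The core idea in the abstract, penalizing policies that induce slowly mixing chains through a hitting-time term, can be sketched in a few lines of code. The following Python sketch is illustrative only and is not the paper's algorithm: it combines a plain score-function (REINFORCE-style) gradient of a hitting-time-regularized objective with a TD(0)-style estimator of expected hitting times, and the toy ring environment, the names reference_state and reward_state, and all step sizes are assumptions made for this example.

    import numpy as np

    # Hypothetical toy setup: a ring of states with one rewarding state and one
    # reference state used to measure hitting times (all names are assumptions).
    rng = np.random.default_rng(0)
    n_states, n_actions = 6, 2            # ring of 6 states; actions: move left/right
    reference_state, reward_state = 0, 3
    alpha, beta, lam, gamma = 0.05, 0.1, 0.1, 0.95

    theta = np.zeros((n_states, n_actions))   # softmax policy parameters
    h = np.zeros(n_states)                    # TD(0) estimate of hitting time to reference_state

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    def step(s, a):
        s_next = (s + (1 if a == 1 else -1)) % n_states
        return s_next, (1.0 if s_next == reward_state else 0.0)

    for episode in range(2000):
        s = int(rng.integers(n_states))
        grad_log = np.zeros_like(theta)       # accumulated score function
        G, t_hit, hit = 0.0, 0, False
        for t in range(50):
            p = policy(s)
            a = rng.choice(n_actions, p=p)
            s_next, r = step(s, a)

            # score function of the softmax policy: e_a - pi(.|s)
            grad_log[s] -= p
            grad_log[s, a] += 1.0
            G += (gamma ** t) * r

            # TD(0) update for the expected hitting time:
            # h(s) satisfies h(s) = 1 + E[h(s')] for s != reference_state, h(reference_state) = 0
            if s != reference_state:
                target = 1.0 + (0.0 if s_next == reference_state else h[s_next])
                h[s] += beta * (target - h[s])

            if not hit:
                t_hit += 1
                hit = (s_next == reference_state)
            s = s_next

        # REINFORCE-style update on the regularized objective:
        # J(theta) = E[return] - lam * E[hitting time to reference_state]
        theta += alpha * (G - lam * t_hit) * grad_log

In this sketch the TD estimate h only illustrates that hitting times can be learned online from transitions; the gradient step itself uses the episodic hitting time through a likelihood-ratio estimator, whereas the paper proposes estimating the gradient of the hitting time directly with temporal-difference learning.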

Published

2014-06-21

How to Cite

Morimura, T., Osogami, T., & Shirai, T. (2014). Mixing-Time Regularized Policy Gradient. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9013

Section

Main Track: Novel Machine Learning Algorithms