Efficient Average Reward Reinforcement Learning Using Constant Shifting Values

Authors

  • Shangdong Yang, Nanjing University
  • Yang Gao, Nanjing University
  • Bo An, Nanyang Technological University
  • Hao Wang, Nanjing University
  • Xingguo Chen, Nanjing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v30i1.10285

Keywords:

Reinforcement Learning, Average Reward, Constant Shifting Value

Abstract

There are two classes of average reward reinforcement learning (RL) algorithms: model-based ones that explicitly maintain MDP models and model-free ones that do not learn such models. Although model-free algorithms are known to be more efficient, they often cannot converge to optimal policies due to the perturbation of parameters. In this paper, a novel model-free algorithm is proposed, which makes use of constant shifting values (CSVs) estimated from prior knowledge. To encourage exploration during the learning process, the algorithm constantly subtracts the CSV from the rewards. A terminating condition is proposed to handle the unboundedness of the Q-values caused by such subtraction. The convergence of the proposed algorithm is proved under very mild assumptions. Furthermore, linear function approximation is investigated to generalize the method to large-scale tasks. Extensive experiments on representative MDPs and the popular game Tetris show that the proposed algorithms significantly outperform state-of-the-art ones.
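
The core idea described in the abstract can be illustrated with a small sketch. The following is a minimal tabular, Q-learning-style implementation assuming a Gym-like discrete environment; the function name `csv_q_learning`, the `q_bound` threshold, and the specific update rule are illustrative assumptions rather than the paper's exact algorithm. It subtracts a fixed shifting value from every reward and stops when the Q-values exceed a preset bound, standing in for the paper's terminating condition.

```python
import numpy as np

def csv_q_learning(env, csv, alpha=0.1, epsilon=0.1, q_bound=1e6, max_steps=100_000):
    """Sketch of average-reward RL with a constant shifting value (CSV).

    Assumptions (not from the paper): `env` follows a Gym-like API with
    discrete observation and action spaces; `csv` is a scalar estimated
    from prior knowledge; `q_bound` stands in for the paper's terminating
    condition on unbounded Q-values.
    """
    n_s, n_a = env.observation_space.n, env.action_space.n
    Q = np.zeros((n_s, n_a))
    s, _ = env.reset()
    for _ in range(max_steps):
        # Epsilon-greedy action selection to keep exploring.
        if np.random.rand() < epsilon:
            a = env.action_space.sample()
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, terminated, truncated, _ = env.step(a)
        # Subtract the constant shifting value from the reward, then apply
        # a relative-value (R-learning style) update with the CSV in place
        # of a learned average-reward estimate.
        target = (r - csv) + np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])
        # Terminating condition: if the CSV underestimates the optimal
        # average reward, Q-values grow without bound, so stop once they
        # exceed a preset threshold.
        if np.abs(Q).max() > q_bound:
            break
        s = s_next if not (terminated or truncated) else env.reset()[0]
    return Q
```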

Published

2016-03-02

How to Cite

Yang, S., Gao, Y., An, B., Wang, H., & Chen, X. (2016). Efficient Average Reward Reinforcement Learning Using Constant Shifting Values. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10285

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Machine Learning Methods