Episodic Policy Gradient Training

Authors

  • Hung Le, Deakin University
  • Majid Abdolshah, Deakin University
  • Thommen K. George, Deakin University
  • Kien Do, Deakin University
  • Dung Nguyen, Deakin University
  • Svetha Venkatesh, Deakin University

DOI:

https://doi.org/10.1609/aaai.v36i7.20694

Keywords:

Machine Learning (ML), Search And Optimization (SO)

Abstract

We introduce a novel training procedure for policy gradient methods in which episodic memory is used to optimize the hyperparameters of reinforcement learning algorithms on the fly. Unlike conventional hyperparameter searches, we formulate hyperparameter scheduling as a standard Markov Decision Process and use episodic memory to store the outcomes of previously used hyperparameters together with their training contexts. At any policy update step, the policy learner refers to these stored experiences and adaptively reconfigures its learning algorithm with new hyperparameters determined from the memory. This mechanism, dubbed Episodic Policy Gradient Training (EPGT), enables an episodic learning process that jointly learns the policy and the learning algorithm's hyperparameters within a single run. Experimental results on both continuous and discrete environments demonstrate the advantage of the proposed method in boosting the performance of various policy gradient algorithms.
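To make the mechanism concrete, the sketch below illustrates one plausible reading of the episodic memory described in the abstract: a buffer of (training context, hyperparameter, outcome) records that is queried at each policy update to pick the next hyperparameter. All names (EpisodicHyperparamMemory, select, store) and the k-nearest-neighbour scoring rule are illustrative assumptions for this sketch; the paper's actual MDP formulation of hyperparameter scheduling may differ.

    import numpy as np

    class EpisodicHyperparamMemory:
        """Hypothetical sketch of the episodic-memory idea: store
        (training context, hyperparameter, outcome) tuples and, at each
        policy update, choose the hyperparameter whose remembered
        outcomes are best near the current context."""

        def __init__(self, candidates, k=5, epsilon=0.1, seed=0):
            self.candidates = candidates  # discrete choices, e.g. learning rates
            self.k = k                    # neighbours used to score each candidate
            self.epsilon = epsilon        # exploration rate over candidates
            self.contexts, self.choices, self.outcomes = [], [], []
            self.rng = np.random.default_rng(seed)

        def select(self, context):
            """Return a hyperparameter for the current training context."""
            if not self.contexts or self.rng.random() < self.epsilon:
                return self.rng.choice(self.candidates)
            ctxs = np.asarray(self.contexts)
            dists = np.linalg.norm(ctxs - context, axis=1)
            nearest = np.argsort(dists)[: self.k]
            # Average remembered outcome per candidate among nearest contexts.
            scores = {}
            for i in nearest:
                scores.setdefault(self.choices[i], []).append(self.outcomes[i])
            return max(scores, key=lambda h: np.mean(scores[h]))

        def store(self, context, choice, outcome):
            """Record the outcome (e.g. return improvement) of a used hyperparameter."""
            self.contexts.append(np.asarray(context, dtype=float))
            self.choices.append(choice)
            self.outcomes.append(float(outcome))

    # Illustrative usage inside a policy-gradient loop (all quantities are stand-ins):
    memory = EpisodicHyperparamMemory(candidates=[1e-4, 3e-4, 1e-3])
    for update in range(100):
        context = np.array([update / 100.0, np.sin(update)])  # stand-in for learning statistics
        lr = memory.select(context)                           # reconfigure the learner
        outcome = -abs(lr - 3e-4) + 0.01 * np.random.randn()  # stand-in for return improvement
        memory.store(context, lr, outcome)

Here the epsilon-greedy draw over candidates merely stands in for whatever exploration the paper's MDP formulation induces; the essential pattern is the store-then-query loop around each policy update.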

Published

2022-06-28

How to Cite

Le, H., Abdolshah, M., George, T. K., Do, K., Nguyen, D., & Venkatesh, S. (2022). Episodic Policy Gradient Training. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7317-7325. https://doi.org/10.1609/aaai.v36i7.20694

Section

AAAI Technical Track on Machine Learning II