Toward Robust Long Range Policy Transfer

Authors

  • Wei-Cheng Tseng National Tsing Hua University
  • Jin-Siang Lin National Tsing Hua University
  • Yao-Min Feng National Tsing Hua University
  • Min Sun National Tsing Hua University Appier Inc., Taiwan MOST Joint Research Center for AI Technology and All Vista Healthcare, Taiwan

Keywords:

Reinforcement Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Humans can master a new task within a few trials by drawing on skills acquired through prior experience. To mimic this capability, hierarchical models that combine primitive policies learned from prior tasks have been proposed. However, these methods fall short of the human range of transferability. We propose a method that leverages the hierarchical structure to alternately train the combination function and adapt the set of diverse primitive policies, efficiently producing a range of complex behaviors on challenging new tasks. We also design two regularization terms to improve the diversity and utilization rate of the primitives in the pre-training phase. We demonstrate that our method outperforms other recent policy transfer methods by combining and adapting these reusable primitives in tasks with continuous action spaces. The experimental results further show that our approach provides a broader transfer range. The ablation study also shows that the regularization terms are critical for long-range policy transfer. Finally, we show that our method consistently outperforms other methods when the quality of the primitives varies.
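The hierarchical structure described in the abstract can be illustrated with a minimal sketch: a combination function produces state-dependent weights over a set of pre-trained primitives, and the executed action is their weighted mixture. Everything below is hypothetical scaffolding (the linear primitives, the softmax combination function, and all dimension names are assumptions for illustration), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, NUM_PRIMITIVES = 4, 2, 3

# Stand-ins for pre-trained primitive policies: each maps a state to a
# continuous action. Fixed random linear maps are used purely for illustration.
primitive_params = [rng.normal(size=(ACTION_DIM, STATE_DIM))
                    for _ in range(NUM_PRIMITIVES)]

def primitive_action(k, state):
    """Action proposed by the k-th primitive policy."""
    return primitive_params[k] @ state

def combination_weights(theta, state):
    """Combination function: softmax weights over primitives given the state."""
    logits = theta @ state                        # shape (NUM_PRIMITIVES,)
    exp = np.exp(logits - logits.max())           # stable softmax
    return exp / exp.sum()

def hierarchical_action(theta, state):
    """Weighted mixture of primitive actions, one option for continuous spaces."""
    w = combination_weights(theta, state)
    actions = np.stack([primitive_action(k, state)
                        for k in range(NUM_PRIMITIVES)])
    return w @ actions                            # shape (ACTION_DIM,)

theta = rng.normal(size=(NUM_PRIMITIVES, STATE_DIM))
state = rng.normal(size=STATE_DIM)
action = hierarchical_action(theta, state)
```

In the alternating scheme the abstract describes, one phase would update the combination parameters (`theta` here) while holding primitives fixed, and the other would adapt the primitives themselves; this sketch shows only the forward pass that both phases share.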

Published

2021-05-18

How to Cite

Tseng, W.-C., Lin, J.-S., Feng, Y.-M., & Sun, M. (2021). Toward Robust Long Range Policy Transfer. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9958-9966. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17196

Section

AAAI Technical Track on Machine Learning IV