Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning

Authors

  • Mingyang Wang, Technical University of Munich
  • Zhenshan Bing, Technical University of Munich
  • Xiangtong Yao, Technical University of Munich
  • Shuai Wang, Tencent Robotics X Lab
  • Kai Huang, Sun Yat-Sen University
  • Hang Su, Politecnico di Milano
  • Chenguang Yang, University of the West of England
  • Alois Knoll, Technical University of Munich

DOI

https://doi.org/10.1609/aaai.v37i8.26210

Keywords

ML: Reinforcement Learning Algorithms, ROB: Behavior Learning & Control, ML: Meta Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

Meta-reinforcement learning enables artificial agents to learn from related training tasks and adapt to new tasks efficiently with minimal interaction data. However, most existing research is still limited to narrow task distributions that are parametric and stationary, and does not consider out-of-distribution tasks during evaluation, thus restricting its applicability. In this paper, we propose MoSS, a context-based Meta-reinforcement learning algorithm based on Self-Supervised task representation learning, to address this challenge. We extend meta-RL to broad non-parametric task distributions, which have never been explored before, and also achieve state-of-the-art results on non-stationary and out-of-distribution tasks. Specifically, MoSS consists of a task inference module and a policy module. We use a Gaussian mixture model for task representation to capture both parametric and non-parametric task variations. Moreover, our online adaptation strategy enables the agent to react at the first sign of a task change, making MoSS applicable to non-stationary tasks. MoSS also exhibits strong generalization robustness on out-of-distribution tasks, which it owes to its reliable and robust task representations. The policy is built on top of an off-policy RL algorithm, and the entire network is trained completely off-policy to ensure high sample efficiency. On the MuJoCo and Meta-World benchmarks, MoSS outperforms prior works in terms of asymptotic performance, sample efficiency (3-50x faster), adaptation efficiency, and generalization robustness on broad and diverse task distributions.
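
For concreteness, the sketch below shows one plausible shape for the Gaussian-mixture task encoder described in the abstract. It is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the class name, dimensions, and the hard component-sampling step are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code) of a context-based task
# inference module with a Gaussian mixture latent: one mixture component
# per non-parametric task family, continuous variation within a component.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMTaskEncoder(nn.Module):
    """Maps a context of transitions (s, a, r, s') to a Gaussian mixture
    posterior over task latents and samples a latent z for the policy."""
    def __init__(self, context_dim, latent_dim=5, n_components=4, hidden=128):
        super().__init__()
        self.n_components = n_components
        self.latent_dim = latent_dim
        self.trunk = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_components)              # mixture weights
        self.means = nn.Linear(hidden, n_components * latent_dim)  # component means
        self.log_stds = nn.Linear(hidden, n_components * latent_dim)

    def forward(self, context):
        # context: (batch, context_dim), e.g. aggregated (s, a, r, s') features
        h = self.trunk(context)
        w = F.softmax(self.logits(h), dim=-1)                      # (B, K)
        mu = self.means(h).view(-1, self.n_components, self.latent_dim)
        std = self.log_stds(h).view(-1, self.n_components, self.latent_dim).exp()
        # Pick a component per batch element, then reparameterize within it.
        # NOTE: this hard choice is not differentiable; a real system would
        # use a relaxation (e.g. Gumbel-softmax) or responsibility weighting.
        k = torch.multinomial(w, num_samples=1).squeeze(-1)        # (B,)
        idx = torch.arange(context.size(0))
        z = mu[idx, k] + std[idx, k] * torch.randn_like(std[idx, k])
        return z, (w, mu, std)

# The policy then conditions on the inferred latent: pi(a | s, z).
encoder = GMMTaskEncoder(context_dim=16)
ctx = torch.randn(8, 16)      # dummy context batch
z, _ = encoder(ctx)
print(z.shape)                # torch.Size([8, 5])
```

In a full system of this kind, the policy and value networks would condition on the inferred latent z, and the encoder would be updated online as new transitions arrive, so the latent can shift as soon as the context reveals a task change; training would proceed off-policy from a replay buffer, consistent with the sample-efficiency claim above.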

Published

2023-06-26

How to Cite

Wang, M., Bing, Z., Yao, X., Wang, S., Huang, K., Su, H., Yang, C., & Knoll, A. (2023). Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10157-10165. https://doi.org/10.1609/aaai.v37i8.26210

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III