Enhancing Context-Based Meta-Reinforcement Learning Algorithms via An Efficient Task Encoder (Student Abstract)
Keywords: Meta Learning, Reinforcement Learning, Representation Learning
Abstract
Meta-Reinforcement Learning (meta-RL) algorithms enable agents to adapt to new tasks with small amounts of exploration, based on experience from similar tasks. Recent studies have pointed out that a good task representation is key to the success of off-policy context-based meta-RL. Inspired by contrastive methods in unsupervised representation learning, we propose a new method that learns the task representation by maximizing the mutual information between the transition tuples in a trajectory and the task embedding. We also propose a new estimate of task similarity based on the Q-function, which can be used to constrain the distribution of the encoded task variables, making the encoded task variables more effective on new tasks. Experiments on meta-RL benchmarks show that the proposed method outperforms existing meta-RL algorithms.
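The mutual-information objective described in the abstract is commonly realized as an InfoNCE-style contrastive loss, where transition embeddings and the embedding of their source task are treated as positive pairs and embeddings of other tasks in the batch serve as negatives. The sketch below is an illustrative NumPy implementation of such a bound under that assumption; the function name, the cosine-similarity logits, and the temperature parameter are choices made for this example, not the authors' actual code.

```python
import numpy as np

def info_nce_loss(transition_emb, task_emb, temperature=0.1):
    """InfoNCE lower bound on the mutual information between transition
    embeddings and task embeddings. Row i of each matrix is assumed to
    form a positive pair; all other rows act as in-batch negatives."""
    # L2-normalize so that the logits are scaled cosine similarities.
    t = transition_emb / np.linalg.norm(transition_emb, axis=1, keepdims=True)
    z = task_emb / np.linalg.norm(task_emb, axis=1, keepdims=True)
    logits = t @ z.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimizing this cross-entropy
    # maximizes the mutual-information lower bound.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls each transition's embedding toward its own task's embedding and pushes it away from other tasks', which is exactly the behavior a context-based task encoder needs: trajectories from the same task should map to nearby task variables.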
How to Cite
Xu, F., Jiang, S., Yin, H., Zhang, Z., Yu, Y., Li, M., Li, D., & Liu, W. (2021). Enhancing Context-Based Meta-Reinforcement Learning Algorithms via An Efficient Task Encoder (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15937-15938. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17965
AAAI Student Abstract and Poster Program