Conceptual Reinforcement Learning for Language-Conditioned Tasks

Authors

  • Shaohui Peng SKL of Processors, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences; Cambricon Technologies
  • Xing Hu SKL of Processors, Institute of Computing Technology, CAS
  • Rui Zhang SKL of Processors, Institute of Computing Technology, CAS; Cambricon Technologies
  • Jiaming Guo SKL of Processors, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences; Cambricon Technologies
  • Qi Yi SKL of Processors, Institute of Computing Technology, CAS; Cambricon Technologies; University of Science and Technology of China
  • Ruizhi Chen University of Chinese Academy of Sciences; SKL of Computer Science, Institute of Software, CAS
  • Zidong Du SKL of Processors, Institute of Computing Technology, CAS; Cambricon Technologies
  • Ling Li University of Chinese Academy of Sciences; SKL of Computer Science, Institute of Software, CAS
  • Qi Guo SKL of Processors, Institute of Computing Technology, CAS
  • Yunji Chen SKL of Processors, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v37i8.26129

Keywords:

ML: Reinforcement Learning Algorithms, ML: Multimodal Learning, ML: Representation Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

Despite the broad application of deep reinforcement learning (RL), transferring and adapting a policy to unseen but similar environments remains a significant challenge. Recently, language-conditioned policies have been proposed to facilitate policy transfer by learning a joint representation of observation and text that captures the compact and invariant information shared across environments. Existing language-conditioned RL methods often learn the joint representation as a simple latent layer for the given instances (episode-specific observation and text), which inevitably includes noisy or irrelevant information and causes spurious, instance-dependent correlations, hurting generalization performance and training efficiency. To address this issue, we propose a conceptual reinforcement learning (CRL) framework to learn a concept-like joint representation for the language-conditioned policy. The key insight is that concepts are compact and invariant representations formed in human cognition by extracting similarities from numerous real-world instances. In CRL, we propose a multi-level attention encoder and two mutual information constraints for learning compact and invariant concepts. Verified in two challenging environments, RTFM and Messenger, CRL significantly improves training efficiency (by up to 70%) and generalization ability (by up to 30%) to new environment dynamics.
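To make the framework's two ingredients concrete, below is a minimal, hedged sketch (not the authors' released code) of (i) a two-level attention encoder that fuses observation and text tokens into a compact joint vector and (ii) an InfoNCE-style lower bound commonly used in practice to approximate mutual information constraints. All names (ConceptEncoder, obs_dim, txt_dim, concept_dim, info_nce) and the exact attention layout are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch of a multi-level attention encoder with an
# InfoNCE-style mutual-information term. Assumed, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptEncoder(nn.Module):
    def __init__(self, obs_dim=64, txt_dim=64, concept_dim=32, heads=4):
        super().__init__()
        # Level 1: cross-attention from observation tokens to text tokens.
        self.cross_attn = nn.MultiheadAttention(
            obs_dim, heads, kdim=txt_dim, vdim=txt_dim, batch_first=True)
        # Level 2: self-attention over the fused tokens.
        self.self_attn = nn.MultiheadAttention(obs_dim, heads, batch_first=True)
        self.proj = nn.Linear(obs_dim, concept_dim)

    def forward(self, obs_tokens, txt_tokens):
        # obs_tokens: (B, N_obs, obs_dim); txt_tokens: (B, N_txt, txt_dim)
        fused, _ = self.cross_attn(obs_tokens, txt_tokens, txt_tokens)
        refined, _ = self.self_attn(fused, fused, fused)
        # Mean-pool to one compact "concept" vector per instance.
        return self.proj(refined.mean(dim=1))

def info_nce(z_a, z_b, temperature=0.1):
    # InfoNCE lower bound on mutual information between paired views.
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    enc = ConceptEncoder()
    obs = torch.randn(8, 10, 64)      # batch of 8, 10 observation tokens
    txt = torch.randn(8, 12, 64)      # batch of 8, 12 text tokens
    concept = enc(obs, txt)           # (8, 32) compact joint representation
    # Placeholder pairing: in practice z_a/z_b would be two views of the
    # same instance (e.g., concept vs. instance representation).
    loss = info_nce(concept, concept.detach())
    print(concept.shape, loss.item())
```

In a full pipeline, one such contrastive term would tie the concept to its instances while a second constraint would encourage compactness; the sketch only shows the shape of the computation.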

Published

2023-06-26

How to Cite

Peng, S., Hu, X., Zhang, R., Guo, J., Yi, Q., Chen, R., Du, Z., Li, L., Guo, Q., & Chen, Y. (2023). Conceptual Reinforcement Learning for Language-Conditioned Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9426-9434. https://doi.org/10.1609/aaai.v37i8.26129

Section

AAAI Technical Track on Machine Learning III