A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning

Authors

  • Tianpei Yang, College of Intelligence and Computing, Tianjin University; University of Alberta
  • Heng You, College of Intelligence and Computing, Tianjin University
  • Jianye Hao, College of Intelligence and Computing, Tianjin University
  • Yan Zheng, College of Intelligence and Computing, Tianjin University
  • Matthew E. Taylor, University of Alberta; Alberta Machine Intelligence Institute (Amii)

DOI:

https://doi.org/10.1609/aaai.v38i15.29571

Keywords:

ML: Reinforcement Learning, ROB: Learning & Optimization for ROB, ML: Transfer, Domain Adaptation, Multi-Task Learning, ML: Deep Neural Architectures and Foundation Models

Abstract

Transfer learning (TL) has shown great potential to improve Reinforcement Learning (RL) efficiency by leveraging prior knowledge in new tasks. However, most existing TL research focuses on transferring knowledge between tasks that share the same state-action space. Transferring from multiple source tasks with different state-action spaces is more challenging, and solving it is key to improving the generalization and practicality of TL in real-world scenarios. This paper proposes TURRET (Transfer Using gRaph neuRal nETworks), which exploits the generalization capability of Graph Neural Networks (GNNs) to enable efficient and effective multi-source policy transfer in the state-action mismatch setting. Through GNNs, TURRET learns a semantic representation that captures the intrinsic structure of the agent, yielding a unified state embedding space for all tasks. As a result, TURRET transfers more efficiently, generalizes well across different tasks, and can easily be combined with existing deep RL algorithms. Experimental results show that TURRET significantly outperforms other TL methods on multiple continuous control tasks, successfully transferring across robots with different state-action spaces.
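To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a GNN over an agent's morphology graph can map robots with different numbers of joints, and hence different state-space sizes, into a fixed-size embedding space. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def gnn_state_embedding(node_feats, adjacency, w_self, w_neigh, rounds=2):
    """Embed a robot's state into a fixed-size vector via message passing.

    node_feats: (n_nodes, d) array of per-joint observations; n_nodes may
                differ between robots, but d (per-joint feature size) is shared.
    adjacency:  (n_nodes, n_nodes) 0/1 matrix of the morphology graph.
    w_self, w_neigh: (d, d) shared weight matrices (same for every robot).
    """
    h = node_feats
    for _ in range(rounds):
        # Each joint aggregates messages from its neighbors in the graph.
        neigh = adjacency @ h
        h = np.tanh(h @ w_self + neigh @ w_neigh)
    # Mean-pooling makes the output independent of the number of joints,
    # so robots with different state spaces share one embedding space.
    return h.mean(axis=0)
```

Because the weights are shared and the pooling is permutation- and size-invariant, a policy trained on top of this embedding can, in principle, be reused across morphologies, which is the property the state-action mismatch setting requires.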

Published

2024-03-24

How to Cite

Yang, T., You, H., Hao, J., Zheng, Y., & Taylor, M. E. (2024). A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16352-16360. https://doi.org/10.1609/aaai.v38i15.29571

Section

AAAI Technical Track on Machine Learning VI