Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning

Authors

  • Jinxin Liu School of Engineering, Westlake University
  • Ziqi Zhang School of Engineering, Westlake University
  • Zhenyu Wei School of Engineering, Westlake University
  • Zifeng Zhuang School of Engineering, Westlake University
  • Yachen Kang School of Engineering, Westlake University
  • Sibo Gai School of Engineering, Westlake University
  • Donglin Wang School of Engineering, Westlake University

DOI:

https://doi.org/10.1609/aaai.v38i12.29302

Keywords:

ML: Reinforcement Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

Offline reinforcement learning (RL) aims to learn a policy from pre-collected, fixed data. While this avoids the time-consuming online interactions of standard RL, it introduces the challenge of out-of-distribution (OOD) state actions and often suffers from data inefficiency during training. Although many efforts have been devoted to addressing OOD state actions, the latter issue (data inefficiency) has received little attention in offline RL. To address it, this paper proposes cross-domain offline RL, which assumes the offline data incorporate additional source-domain data collected under different transition dynamics (environments), and expects these data to improve offline data efficiency. In this setting, we identify a new challenge of OOD transition dynamics, beyond the common OOD state actions issue, when utilizing cross-domain offline data. We then propose our method BOSA, which employs two support-constrained objectives to address both OOD issues. Through extensive experiments in the cross-domain offline RL setting, we demonstrate that BOSA can greatly improve offline data efficiency: using only 10% of the target data, BOSA achieves 74.4% of the performance of a SOTA offline RL method trained on 100% of the target data. Additionally, we show that BOSA can be effortlessly plugged into model-based offline RL and into noising data-augmentation techniques (used for generating source-domain data), naturally avoiding the potential dynamics mismatch between target-domain data and newly generated source-domain data.
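To make the abstract's "two support-constrained objectives" concrete, below is a minimal PyTorch sketch of what such constraints might look like. This is an illustration under stated assumptions, not the paper's released implementation: the callables `policy`, `q_value`, `behavior_log_prob`, and `target_dyn_log_prob` are hypothetical stand-ins for learned models, and the threshold values are placeholders.

```python
# Minimal sketch (hypothetical interfaces, not the authors' code) of
# support constraints for the two OOD issues named in the abstract.
import torch

def supported_policy_loss(states, policy, q_value, behavior_log_prob,
                          eps_action=0.01):
    """OOD state actions: maximize Q only over actions that lie within
    the support of the behavior (data-collecting) policy."""
    actions = policy(states)                            # (B, action_dim)
    density = behavior_log_prob(states, actions).exp()  # (B,)
    in_support = (density > eps_action).float()
    # Out-of-support actions are masked out and contribute no objective.
    return -(in_support * q_value(states, actions)).mean()

def supported_source_transitions(states, actions, next_states,
                                 target_dyn_log_prob, eps_dyn=0.01):
    """OOD transition dynamics: keep only source-domain transitions that
    are plausible under a dynamics model fit on target-domain data."""
    density = target_dyn_log_prob(states, actions, next_states).exp()
    mask = density > eps_dyn                            # (B,) boolean
    return states[mask], actions[mask], next_states[mask]
```

In this reading, the first constraint guards policy improvement against OOD state actions, while the second filters cross-domain data so that only transitions supported by the target-domain dynamics enter training.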

Published

2024-03-24

How to Cite

Liu, J., Zhang, Z., Wei, Z., Zhuang, Z., Kang, Y., Gai, S., & Wang, D. (2024). Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13945-13953. https://doi.org/10.1609/aaai.v38i12.29302

Issue

Vol. 38 No. 12 (2024)

Section

AAAI Technical Track on Machine Learning III