CUDC: A Curiosity-Driven Unsupervised Data Collection Method with Adaptive Temporal Distances for Offline Reinforcement Learning

Authors

  • Chenyu Sun: Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University (NTU), Singapore; School of Computer Science and Engineering, NTU, Singapore; Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore
  • Hangwei Qian: Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore
  • Chunyan Miao: Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University (NTU), Singapore; School of Computer Science and Engineering, NTU, Singapore; Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore

DOI:

https://doi.org/10.1609/aaai.v38i13.29437

Keywords:

ML: Reinforcement Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

Offline reinforcement learning (RL) aims to learn an effective policy from a pre-collected dataset. Most existing works focus on developing sophisticated learning algorithms, with less emphasis on improving the data collection process. Moreover, it is even more challenging to extend the single-task setting and collect a task-agnostic dataset that allows an agent to perform multiple downstream tasks. In this paper, we propose a Curiosity-driven Unsupervised Data Collection (CUDC) method that expands the feature space using adaptive temporal distances for task-agnostic data collection, ultimately improving learning efficiency and capability for multi-task offline RL. To achieve this, CUDC estimates the probability that the k-step future states are reachable from the current states, and adapts how many steps into the future the dynamics model should predict. With this adaptive reachability mechanism in place, the feature representation can be diversified, and the agent can guide itself, driven by curiosity, to collect higher-quality data. Empirically, CUDC surpasses existing unsupervised methods in both efficiency and learning performance on various downstream offline RL tasks from the DeepMind Control Suite.
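To make the adaptive-temporal-distance idea in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): it assumes some reachability estimator has already produced, for a batch of transitions, the probability that the state k steps ahead is reachable from the current state, and it adjusts k accordingly. The function name, thresholds, and update rule are hypothetical assumptions for illustration only.

```python
import numpy as np

def adapt_temporal_distance(k, reach_probs, high=0.9, low=0.5, k_min=1, k_max=20):
    """Hypothetical rule: grow k when k-step futures are judged easily reachable
    (prediction task too easy), shrink k when they are rarely reachable
    (prediction task too hard). `reach_probs` holds per-sample estimated
    probabilities that the k-step future state is reachable."""
    mean_reach = float(np.mean(reach_probs))
    if mean_reach > high:
        return min(k + 1, k_max)
    if mean_reach < low:
        return max(k - 1, k_min)
    return k

# Example: most k-step futures in the batch look reachable, so k increases.
reach_probs = np.array([0.95, 0.92, 0.88, 0.97])
print(adapt_temporal_distance(k=3, reach_probs=reach_probs))  # -> 4
```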

Published

2024-03-24

How to Cite

Sun, C., Qian, H., & Miao, C. (2024). CUDC: A Curiosity-Driven Unsupervised Data Collection Method with Adaptive Temporal Distances for Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 15145-15153. https://doi.org/10.1609/aaai.v38i13.29437

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV