Dream to Generalize: Zero-Shot Model-Based Reinforcement Learning for Unseen Visual Distractions
DOI:
https://doi.org/10.1609/aaai.v37i6.25945
Keywords:
ML: Reinforcement Learning Algorithms, ML: Representation Learning, ML: Unsupervised & Self-Supervised Learning, ROB: Learning & Optimization for ROB
Abstract
Model-based reinforcement learning (MBRL) has been used to efficiently solve vision-based control tasks with high-dimensional image observations. Although recent MBRL algorithms perform well on the observations they were trained on, they fail when faced with unseen visual distractions. These task-irrelevant distractions (e.g., clouds, shadows, and light) may be constantly present in real-world scenarios. In this study, we propose a novel self-supervised method, Dream to Generalize (Dr. G), for zero-shot MBRL. Dr. G trains its encoder and world model with dual contrastive learning, which efficiently captures task-relevant features across multi-view data augmentations. We also introduce a recurrent state inverse dynamics model that helps the world model better capture the temporal structure of the data. Together, these methods enhance the robustness of the world model against visual distractions. To evaluate generalization performance, we first train Dr. G on simple backgrounds and then test it on complex natural video backgrounds in the DeepMind Control suite and on the randomized environments in Robosuite. Dr. G yields performance improvements of 117% and 14% over prior works, respectively. Our code is open-sourced and available at https://github.com/JeongsooHa/DrG.git
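As a rough illustration of the multi-view contrastive objective the abstract describes, the sketch below pulls together the latent features of two differently augmented views of the same observation batch with an InfoNCE-style loss. This is a minimal generic sketch under stated assumptions, not the authors' implementation (see the linked repository for Dr. G itself); the encoder architecture, the augmentation, and all hyperparameters here are illustrative choices.

```python
# Minimal sketch of multi-view contrastive representation learning (InfoNCE).
# NOT the Dr. G codebase: encoder, augmentation, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy convolutional encoder mapping 84x84 RGB observations to feature vectors."""
    def __init__(self, feature_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size once
            n = self.conv(torch.zeros(1, 3, 84, 84)).shape[1]
        self.fc = nn.Linear(n, feature_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: features of two augmented views of the same observation are
    positives; every other pairing in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))        # positive pairs lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage: encode two augmentations of the same batch and align them in latent space.
# Additive noise stands in for the image augmentations used in practice.
encoder = Encoder()
obs = torch.rand(8, 3, 84, 84)
aug1 = obs + 0.05 * torch.randn_like(obs)
aug2 = obs + 0.05 * torch.randn_like(obs)
loss = info_nce(encoder(aug1), encoder(aug2))
loss.backward()
```

In Dr. G itself this idea is trained jointly with the world model (the "dual" contrastive objective) alongside the recurrent state inverse dynamics model; the paper specifies the exact losses.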
Published
2023-06-26
How to Cite
Ha, J., Kim, K., & Kim, Y. (2023). Dream to Generalize: Zero-Shot Model-Based Reinforcement Learning for Unseen Visual Distractions. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7802-7810. https://doi.org/10.1609/aaai.v37i6.25945
Issue
Vol. 37 No. 6 (2023)
Section
AAAI Technical Track on Machine Learning I