Visual Transfer For Reinforcement Learning Via Wasserstein Domain Confusion
DOI:
https://doi.org/10.1609/aaai.v35i11.17139
Keywords:
Reinforcement Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning
Abstract
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task. WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective. WAPPO outperforms the prior state-of-the-art in visual transfer and successfully transfers policies across Visual Cartpole and both the easy and hard settings of 16 OpenAI Procgen environments.
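To make the mechanism concrete, the kind of objective described in the abstract can be sketched in the dual (Kantorovich-Rubinstein) form commonly used to estimate the Wasserstein-1 distance: a critic scores features from source and target observations, and the resulting distance estimate is added to the policy-optimization loss as a confusion term. The notation below (feature extractor f_theta, 1-Lipschitz critic C_w, weighting coefficient lambda, and a generic PPO loss) is illustrative placeholder notation, not the paper's exact formulation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative sketch only: a Wasserstein-1 "domain confusion" term in
% Kantorovich--Rubinstein dual form added to a PPO loss. Here f_\theta is a
% shared feature extractor, C_w a 1-Lipschitz critic, P_S and P_T the source
% and target observation distributions, and \lambda a weighting coefficient;
% all of these are assumed placeholder symbols, not the authors' notation.
\begin{align}
  W_1\bigl((f_\theta)_{\#}P_S,\,(f_\theta)_{\#}P_T\bigr)
    &= \sup_{\lVert C_w\rVert_{L}\le 1}
       \mathbb{E}_{x\sim P_S}\bigl[C_w(f_\theta(x))\bigr]
     - \mathbb{E}_{x\sim P_T}\bigl[C_w(f_\theta(x))\bigr] \\
  \mathcal{L}(\theta)
    &= \mathcal{L}_{\mathrm{PPO}}(\theta)
     + \lambda\, W_1\bigl((f_\theta)_{\#}P_S,\,(f_\theta)_{\#}P_T\bigr)
\end{align}
\end{document}

Minimizing the second line with respect to theta while maximizing the first with respect to the critic pushes source and target features toward the same distribution, so a policy trained on source observations can act on target observations.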
Published
2021-05-18
How to Cite
Roy, J., & Konidaris, G. D. (2021). Visual Transfer For Reinforcement Learning Via Wasserstein Domain Confusion. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9454-9462. https://doi.org/10.1609/aaai.v35i11.17139
Issue
Vol. 35 No. 11 (2021)
Section
AAAI Technical Track on Machine Learning IV