TY  - JOUR
AU  - Jiang, Jiechuan
AU  - Lu, Zongqing
PY  - 2020/04/03
Y2  - 2024/03/28
TI  - Generative Exploration and Exploitation
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 34
IS  - 04
SE  - AAAI Technical Track: Machine Learning
DO  - 10.1609/aaai.v34i04.5858
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/5858
SP  - 4337-4344
AB  - <p>Sparse reward is one of the biggest challenges in reinforcement learning (RL). In this paper, we propose a novel method called <em>Generative Exploration and Exploitation</em> (GENE) to overcome sparse reward. GENE automatically generates start states to encourage the agent to explore the environment and to exploit received reward signals. GENE can adaptively trade off between exploration and exploitation according to the varying distributions of states experienced by the agent as learning progresses. GENE relies on no prior knowledge about the environment and can be combined with any RL algorithm, whether on-policy or off-policy, single-agent or multi-agent. Empirically, we demonstrate that GENE significantly outperforms existing methods in three tasks with only binary rewards, including Maze, Maze Ant, and Cooperative Navigation. Ablation studies verify the emergence of progressive exploration and automatic reversing.</p>
ER  -