TY  - JOUR
AU  - Lobel, Sam
AU  - Gottesman, Omer
AU  - Allen, Cameron
AU  - Bagaria, Akhil
AU  - Konidaris, George
PY  - 2022/06/28
Y2  - 2024/03/28
TI  - Optimistic Initialization for Exploration in Continuous Control
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 7
SE  - AAAI Technical Track on Machine Learning II
DO  - 10.1609/aaai.v36i7.20727
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20727
SP  - 7612-7619
AB  - Optimistic initialization underpins many theoretically sound exploration schemes in tabular domains; however, in the deep function approximation setting, optimism can quickly disappear if initialized naively. We propose a framework for more effectively incorporating optimistic initialization into reinforcement learning for continuous control. Our approach uses metric information about the state-action space to estimate which transitions are still unexplored, and explicitly maintains the initial Q-value optimism for the corresponding state-action pairs. We also develop methods for efficiently approximating these training objectives, and for incorporating domain knowledge into the optimistic envelope to improve sample efficiency. We empirically evaluate these approaches on a variety of hard exploration problems in continuous control, where our method outperforms existing exploration techniques.
ER  - 