TY - JOUR
AU - Dabney, Will
AU - Barreto, André
AU - Rowland, Mark
AU - Dadashi, Robert
AU - Quan, John
AU - Bellemare, Marc G.
AU - Silver, David
PY - 2021/05/18
Y2 - 2024/03/28
TI - The Value-Improvement Path: Towards Better Representations for Reinforcement Learning
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 8
SE - AAAI Technical Track on Machine Learning I
DO - 10.1609/aaai.v35i8.16880
UR - https://ojs.aaai.org/index.php/AAAI/article/view/16880
SP - 7160-7168
AB - In value-based reinforcement learning (RL), unlike in supervised learning, the agent faces not a single, stationary, approximation problem, but a sequence of value prediction problems. Each time the policy improves, the nature of the problem changes, shifting both the distribution of states and their values. In this paper we take a novel perspective, arguing that the value prediction problems faced by an RL agent should not be addressed in isolation, but rather as a single, holistic, prediction problem. An RL algorithm generates a sequence of policies that, at least approximately, improve towards the optimal policy. We explicitly characterize the associated sequence of value functions and call it the value-improvement path. Our main idea is to approximate the value-improvement path holistically, rather than to solely track the value function of the current policy. Specifically, we discuss the impact that this holistic view of RL has on representation learning. We demonstrate that a representation that spans the past value-improvement path will also provide an accurate value approximation for future policy improvements. We use this insight to better understand existing approaches to auxiliary tasks and to propose new ones. To test our hypothesis empirically, we augmented a standard deep RL agent with an auxiliary task of learning the value-improvement path. In a study of Atari 2600 games, the augmented agent achieved approximately double the mean and median performance of the baseline agent.
ER -