Metric Residual Network for Sample Efficient Goal-Conditioned Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v37i7.26058
Keywords:
ML: Deep Neural Architectures, ML: Reinforcement Learning Algorithms
Abstract
Goal-conditioned reinforcement learning (GCRL) has a wide range of potential real-world applications, including manipulation and navigation problems in robotics. Sample efficiency is of the utmost importance for GCRL in such robotics tasks since, by default, the agent is only rewarded when it reaches its goal. While several methods have been proposed to improve the sample efficiency of GCRL, one relatively under-studied approach is the design of neural architectures to support sample efficiency. In this work, we introduce a novel neural architecture for GCRL that achieves significantly better sample efficiency than the commonly used monolithic network architecture. The key insight is that the optimal action-value function must satisfy the triangle inequality in a specific sense. Building on this insight, we introduce the metric residual network (MRN), which deliberately decomposes the action-value function into the negated sum of a metric and a residual asymmetric component. MRN provably approximates any optimal action-value function, making it a fitting neural architecture for GCRL. We conduct comprehensive experiments across 12 standard benchmark environments in GCRL. The empirical results demonstrate that MRN uniformly outperforms other state-of-the-art GCRL neural architectures in terms of sample efficiency. The code is available at https://github.com/Cranial-XIX/metric-residual-network.
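To make the decomposition concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract. It is not the authors' implementation (see the linked repository for the reference code); the module names, the Euclidean symmetric head, and the max-over-positive-gaps asymmetric head are illustrative assumptions, chosen because the symmetric part is a true metric and the asymmetric part still respects the triangle inequality.

```python
# Hypothetical sketch of an MRN-style critic, assuming PyTorch.
# Q(s, a, g) = -(d_sym + d_asym): a symmetric metric plus an
# asymmetric residual, both computed in a learned embedding space.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    # Small two-layer embedding network (hidden size is an assumption).
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class MetricResidualNetwork(nn.Module):
    """Decomposed goal-conditioned critic: Q(s, a, g) = -(d_sym + d_asym)."""

    def __init__(self, state_dim, action_dim, goal_dim, embed_dim=16):
        super().__init__()
        # Separate embeddings for the (state, action) pair and the goal.
        self.sym_sa = mlp(state_dim + action_dim, embed_dim)
        self.sym_g = mlp(goal_dim, embed_dim)
        self.asym_sa = mlp(state_dim + action_dim, embed_dim)
        self.asym_g = mlp(goal_dim, embed_dim)

    def forward(self, state, action, goal):
        sa = torch.cat([state, action], dim=-1)
        # Symmetric part: Euclidean distance in embedding space (a metric).
        d_sym = torch.norm(self.sym_sa(sa) - self.sym_g(goal), dim=-1)
        # Asymmetric residual: max over positive coordinate gaps. This is a
        # quasimetric: it satisfies the triangle inequality but in general
        # d(x, y) != d(y, x), capturing the asymmetry of reaching a goal.
        d_asym = torch.max(
            torch.relu(self.asym_sa(sa) - self.asym_g(goal)), dim=-1
        ).values
        return -(d_sym + d_asym)


# Usage: a batch of 32 transitions with assumed dimensionalities.
q_net = MetricResidualNetwork(state_dim=10, action_dim=4, goal_dim=3)
q = q_net(torch.randn(32, 10), torch.randn(32, 4), torch.randn(32, 3))
print(q.shape)  # torch.Size([32])
```

Because both heads are nonnegative and the symmetric head is a metric, the negated sum can only represent action-values that obey the triangle-inequality structure the abstract highlights, which is the architectural bias the paper argues improves sample efficiency.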
Published
2023-06-26
How to Cite
Liu, B., Feng, Y., Liu, Q., & Stone, P. (2023). Metric Residual Network for Sample Efficient Goal-Conditioned Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8799-8806. https://doi.org/10.1609/aaai.v37i7.26058
Section
AAAI Technical Track on Machine Learning II