Fast and Data Efficient Reinforcement Learning from Pixels via Non-parametric Value Approximation

Authors

  • Alexander Long, University of New South Wales
  • Alan Blair, University of New South Wales
  • Herke van Hoof, University of Amsterdam

DOI:

https://doi.org/10.1609/aaai.v36i7.20728

Keywords:

Machine Learning (ML)

Abstract

We present Nonparametric Approximation of Inter-Trace returns (NAIT), a Reinforcement Learning algorithm for discrete-action, pixel-based environments that is both highly sample- and computation-efficient. NAIT is a lazy-learning approach with an update that is equivalent to episodic Monte-Carlo on episode completion, but that allows the stable incorporation of rewards while an episode is ongoing. We make use of a fixed domain-agnostic representation, simple distance-based exploration, and a proximity-graph-based lookup to facilitate extremely fast execution. We empirically evaluate NAIT on both the 26- and 57-game variants of ATARI100k where, despite its simplicity, it achieves competitive performance in the online setting with greater than 100x speedup in wall-time.
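To make the "non-parametric value approximation" idea concrete: the abstract describes value estimates obtained by lazy, lookup-based learning over stored returns rather than by fitting parametric function approximators. The sketch below is an illustration only, not the paper's NAIT algorithm (which additionally uses a fixed domain-agnostic representation, an inter-trace update, and a proximity-graph index for fast lookup); all class and method names here are hypothetical, and the brute-force k-nearest-neighbour search stands in for the proximity graph.

```python
import numpy as np


class NonParametricValueStore:
    """Toy lazy-learning value estimator (illustrative sketch, not NAIT itself).

    States are stored as embedding vectors paired with the Monte-Carlo
    returns observed from them; a query state's value is the average
    return of its k nearest stored neighbours.
    """

    def __init__(self, k=3):
        self.k = k
        self.keys = []     # stored state embeddings
        self.returns = []  # Monte-Carlo returns observed from those states

    def add(self, embedding, mc_return):
        # Lazy learning: just memorise the (state, return) pair; no fitting.
        self.keys.append(np.asarray(embedding, dtype=float))
        self.returns.append(float(mc_return))

    def value(self, embedding):
        # Brute-force k-NN lookup; NAIT instead uses a proximity graph
        # to make this step fast.
        if not self.keys:
            return 0.0
        query = np.asarray(embedding, dtype=float)
        dists = [np.linalg.norm(query - key) for key in self.keys]
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([self.returns[i] for i in nearest]))
```

Under this scheme, "updating" the value function after an episode amounts to inserting the episode's state embeddings with their realised returns, which is why the update is equivalent to episodic Monte-Carlo on episode completion.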

Published

2022-06-28

How to Cite

Long, A., Blair, A., & Hoof, H. van. (2022). Fast and Data Efficient Reinforcement Learning from Pixels via Non-parametric Value Approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7620-7627. https://doi.org/10.1609/aaai.v36i7.20728

Section

AAAI Technical Track on Machine Learning II