Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks

Authors

  • Fabio Pardo, Imperial College London
  • Vitaly Levdik, Imperial College London
  • Petar Kormushev, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v34i04.5983

Abstract

Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expense of these numerous parallel updates has so far limited the approach to small tabular cases. To tackle this problem, we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates, the resulting mapping from frames to distance maps directly informs the agent about which places are reachable and in how many steps. As an example application, we show that replacing the random actions in ε-greedy exploration with several actions towards feasible goals generates better exploratory trajectories in the Montezuma's Revenge and Super Mario All-Stars games.
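
To make the mechanism concrete, below is a minimal PyTorch sketch of goal-conditioned Q-learning over on-screen goal coordinates, in the spirit of the all-goals update described in the abstract. The class AllGoalsQNet, the helper all_goals_td_targets, the specific architecture, and the reward convention (reward 1 on arrival at a goal pixel, 0 elsewhere) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class AllGoalsQNet(nn.Module):
    """Fully convolutional network: frame -> per-goal Q-values.

    The output has shape (batch, num_actions, H, W); entry [b, a, y, x]
    estimates the value of action a for reaching goal coordinate (y, x).
    """

    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_actions, kernel_size=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)


def all_goals_td_targets(q_next: torch.Tensor,
                         agent_pos: torch.Tensor,
                         gamma: float = 0.99) -> torch.Tensor:
    """TD targets for every goal pixel from a single transition.

    q_next:    (batch, num_actions, H, W) Q-values at the next state.
    agent_pos: (batch, 2) agent (y, x) coordinates in the next state.

    With reward 1 on arrival and 0 elsewhere, the learned Q-value for a
    goal approximates gamma ** (steps to that goal), so a distance map
    can be read off as log(Q) / log(gamma).
    """
    batch, _, h, w = q_next.shape
    # Greedy bootstrap for every goal at once: max over the action axis.
    max_next = q_next.max(dim=1).values  # (batch, H, W)
    # Reward map: 1 at the goal the agent just reached, 0 elsewhere.
    reached = torch.zeros(batch, h, w, device=q_next.device)
    idx = torch.arange(batch)
    reached[idx, agent_pos[:, 0].long(), agent_pos[:, 1].long()] = 1.0
    # Pursuit of a goal terminates on arrival, so no bootstrap there.
    return reached + gamma * (1.0 - reached) * max_next
```

Training would then regress the Q-values of the action actually taken towards these targets at every goal pixel simultaneously, which is where the all-goals update gets its efficiency: a single transition and a single forward pass yield H × W targets.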

Published

2020-04-03

How to Cite

Pardo, F., Levdik, V., & Kormushev, P. (2020). Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5355-5362. https://doi.org/10.1609/aaai.v34i04.5983

Section

AAAI Technical Track: Machine Learning