Planning from Pixels in Atari with Learned Symbolic Representations

Authors

  • Andrea Dittadi, Technical University of Denmark
  • Frederik K. Drachmann, Technical University of Denmark
  • Thomas Bolander, Technical University of Denmark

DOI:

https://doi.org/10.1609/aaai.v35i6.16627

Keywords:

Neuro-Symbolic AI (NSAI)

Abstract

Width-based planning methods have been shown to yield state-of-the-art performance in the Atari 2600 domain using pixel input. One successful approach, RolloutIW, represents states with the B-PROST boolean feature set. An augmented version of RolloutIW, π-IW, shows that learned features can be competitive with handcrafted ones for width-based search. In this paper, we leverage variational autoencoders (VAEs) to learn features directly from pixels in a principled manner, and without supervision. The inference model of the trained VAEs extracts boolean features from pixels, and RolloutIW plans with these features. The resulting combination outperforms the original RolloutIW and human professional play on Atari 2600 and drastically reduces the size of the feature set.
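As a rough illustration of the pipeline the abstract describes, the sketch below shows a convolutional VAE with Bernoulli (binary) latent variables whose inference network maps a preprocessed Atari frame to a vector of boolean features, which a width-based planner such as RolloutIW could then consume. The layer sizes, latent dimensionality, frame preprocessing, and the relaxed-Bernoulli training trick are illustrative assumptions, not the authors' exact model.

# Minimal sketch (assumed architecture, not the paper's exact one): a VAE with
# binary latents whose encoder turns an 84x84 grayscale Atari frame into boolean features.
import torch
import torch.nn as nn


class BinaryLatentVAE(nn.Module):
    def __init__(self, num_latents: int = 100):
        super().__init__()
        # Encoder: frame -> logits of independent Bernoulli latent variables.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),   # 84 -> 42
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 42 -> 21
            nn.Flatten(),
            nn.Linear(64 * 21 * 21, num_latents),
        )
        # Decoder: binary latent code -> per-pixel logits of the reconstructed frame.
        self.decoder = nn.Sequential(
            nn.Linear(num_latents, 64 * 21 * 21), nn.ReLU(),
            nn.Unflatten(1, (64, 21, 21)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),     # 21 -> 84
        )

    def forward(self, frames: torch.Tensor):
        logits = self.encoder(frames)
        # Relaxed (Gumbel-based) Bernoulli sampling keeps training differentiable;
        # the training loss would combine reconstruction error with a KL term.
        posterior = torch.distributions.RelaxedBernoulli(
            temperature=torch.tensor(0.5), logits=logits
        )
        z = posterior.rsample()
        recon_logits = self.decoder(z)
        return recon_logits, logits

    @torch.no_grad()
    def boolean_features(self, frames: torch.Tensor) -> torch.Tensor:
        # At planning time, threshold the posterior means to obtain the boolean
        # feature set that a width-based planner would use in place of B-PROST.
        return torch.sigmoid(self.encoder(frames)) > 0.5


# Usage: extract boolean features from a batch of (stand-in) preprocessed frames.
model = BinaryLatentVAE(num_latents=100)
frames = torch.rand(8, 1, 84, 84)
features = model.boolean_features(frames)  # shape (8, 100), dtype bool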

Published

2021-05-18

How to Cite

Dittadi, A., Drachmann, F. K., & Bolander, T. (2021). Planning from Pixels in Atari with Learned Symbolic Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6), 4941-4949. https://doi.org/10.1609/aaai.v35i6.16627

Issue

Vol. 35 No. 6 (2021)

Section

AAAI Technical Track Focus Area on Neuro-Symbolic AI