Reuse of Neural Modules for General Video Game Playing

Authors

  • Alexander Braylan, The University of Texas at Austin
  • Mark Hollenbeck, The University of Texas at Austin
  • Elliot Meyerson, The University of Texas at Austin
  • Risto Miikkulainen, The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v30i1.10014

Keywords:

Game Playing and Interactive Entertainment, General Game Playing, Neural Networks, Evolutionary Computation, Transfer Learning, Neural Reuse, Reinforcement Learning, General Video Game Playing, Neuroevolution, Knowledge Transfer

Abstract

A general approach to knowledge transfer is introduced in which an agent controlled by a neural network adapts how it reuses existing networks as it learns in a new domain. Networks trained for a new domain can improve their performance by routing activation selectively through previously learned neural structure, regardless of how or for what it was learned. A neuroevolution implementation of this approach is presented with application to high-dimensional sequential decision-making domains. This approach is more general than previous approaches to neural transfer for reinforcement learning. It is domain-agnostic and requires no prior assumptions about the nature of task relatedness or mappings. The method is analyzed in a stochastic version of the Arcade Learning Environment, demonstrating that it improves performance in some of the more complex Atari 2600 games, and that the success of transfer can be predicted based on a high-level characterization of game dynamics.
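To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of how a network for a new domain might route activation through a frozen, previously learned module. All names (`mlp_forward`, `policy`, the weight matrices, and the scalar `gate`) are hypothetical; in the paper the reuse structure is adapted by neuroevolution rather than set by hand, and the source module may have been trained on any earlier task.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2):
    """A simple two-layer network: tanh hidden layer, linear output."""
    return np.tanh(x @ W1) @ W2

# "Source" module: a previously learned network. Random weights stand in
# here for weights actually trained on an earlier game.
W1_src = rng.normal(size=(4, 8))
W2_src = rng.normal(size=(8, 3))

# "Target" network being trained on the new domain.
W1_new = rng.normal(size=(4, 8))
W2_new = rng.normal(size=(8, 3))

def policy(x, gate):
    """Blend the new network's output with the frozen source module's.
    The gate (here a scalar in [0, 1]) controls how much activation is
    routed through the old structure; gate=0 ignores the source module."""
    new_out = mlp_forward(x, W1_new, W2_new)
    src_out = mlp_forward(x, W1_src, W2_src)
    return (1 - gate) * new_out + gate * src_out

x = rng.normal(size=4)        # observation from the new domain
action_values = policy(x, gate=0.3)
```

In this sketch the source weights are never modified; only the routing (the gate, and in a fuller version the new network's weights) would be adapted during evolution, which is what makes the approach agnostic to how or why the source module was originally trained.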

Published

2016-02-21

How to Cite

Braylan, A., Hollenbeck, M., Meyerson, E., & Miikkulainen, R. (2016). Reuse of Neural Modules for General Video Game Playing. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10014

Section

Technical Papers: Game Playing and Interactive Entertainment