Retaining Learned Behavior During Real-Time Neuroevolution


  • Thomas D’Silva University of Texas at Austin
  • Roy Janik University of Texas at Austin
  • Michael Chrien University of Texas at Austin
  • Kenneth O. Stanley University of Texas at Austin
  • Risto Miikkulainen University of Texas at Austin



Creating software-controlled agents in videogames that can learn and adapt to player behavior is a difficult task. The real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method, which evolves increasingly complex artificial neural networks in real time, has been shown to be an effective way of achieving behaviors beyond simple scripted character behavior. In NERO, a videogame built to showcase the features of rtNEAT, agents are trained in various tasks, including shooting enemies, avoiding enemies, and navigating around obstacles. Training the neural networks to perform a series of distinct tasks can be problematic: the longer they train on a new task, the more likely they are to forget previously learned skills. This paper investigates a technique for increasing the probability that a population will remember old skills as it learns new ones. By setting aside the most fit individuals at the time a skill has been learned and then occasionally introducing their offspring into the population, the skill is retained. How large to make this milestone pool of individuals, and how often to insert the offspring of the milestone pool into the general population, are the primary focus of this paper.
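The milestone-pool idea described above can be sketched as follows. This is a minimal illustrative sketch, not NERO's actual implementation: genomes are simplified to flat weight vectors, and the names (`snapshot_milestones`, `next_offspring`, `insert_prob`) and the tournament-selection step are assumptions for the sake of the example.

```python
import random

def mutate(genome, rate=0.1):
    # Illustrative mutation: jitter each weight slightly.
    # (rtNEAT also mutates network topology; that is omitted here.)
    return [w + random.gauss(0, rate) for w in genome]

def snapshot_milestones(population, fitness, pool_size):
    # Once a skill has been learned, set aside copies of the
    # most fit individuals as the milestone pool.
    ranked = sorted(population, key=fitness, reverse=True)
    return [list(g) for g in ranked[:pool_size]]

def next_offspring(population, fitness, milestone_pool, insert_prob):
    # With probability insert_prob, breed from the milestone pool so
    # offspring carrying the old skill keep re-entering the population;
    # otherwise select a parent from the current population as usual
    # (here via a simple 3-way tournament).
    if milestone_pool and random.random() < insert_prob:
        parent = random.choice(milestone_pool)
    else:
        parent = max(random.sample(population, 3), key=fitness)
    return mutate(parent)
```

The two quantities the paper studies map directly onto `pool_size` (how large the milestone pool is) and `insert_prob` (how often milestone offspring are inserted into the general population).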




How to Cite

D’Silva, T., Janik, R., Chrien, M., Stanley, K., & Miikkulainen, R. (2021). Retaining Learned Behavior During Real-Time Neuroevolution. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 1(1), 39-44.