How Should an Agent Practice?

Authors

  • Janarthanan Rajendran, University of Michigan
  • Richard Lewis, University of Michigan
  • Vivek Veeriah, University of Michigan
  • Honglak Lee, University of Michigan
  • Satinder Singh, University of Michigan

DOI:

https://doi.org/10.1609/aaai.v34i04.5995

Abstract

We present a method for learning intrinsic reward functions to drive the learning of an agent during periods of practice in which extrinsic task rewards are not available. During practice, the environment may differ from the one available for training and evaluation with extrinsic rewards. We refer to this setup of alternating periods of practice and objective evaluation as practice-match, drawing an analogy to regimes of skill acquisition common for humans in sports and games. The agent must effectively use periods in the practice environment so that performance improves during matches. In the proposed method, the intrinsic practice reward is learned through a meta-gradient approach that adapts the practice reward parameters to reduce the extrinsic match reward loss computed from matches. We illustrate the method on a simple grid world, and evaluate it in two games in which the practice environment differs from the match environment: Pong with practice against a wall without an opponent, and PacMan with practice in a maze without ghosts. The results show gains from learning during practice periods in addition to matches, compared to learning during matches only.
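To make the abstract's meta-gradient structure concrete, below is a minimal numerical sketch of the practice-match loop, not the paper's RL implementation. The "agent" is reduced to a parameter vector, the intrinsic practice reward is a simple quadratic parametrized by a learned vector eta, and the match loss is a quadratic around a fixed goal; names such as alpha, beta, eta, and goal are illustrative assumptions. The point of the sketch is the structure: the practice update is driven only by the intrinsic reward, and the intrinsic reward parameters are updated by differentiating the match loss through that practice update.

```python
import numpy as np

# Toy sketch (assumptions, not the paper's implementation):
#   theta : agent parameters, updated during practice using only the
#           intrinsic reward R_in(theta; eta) = -0.5 * ||theta - eta||^2
#   eta   : intrinsic practice-reward parameters, updated by the
#           meta-gradient of the extrinsic match loss through the
#           practice update
#   goal  : stand-in for the extrinsic match objective

rng = np.random.default_rng(0)

dim = 4
goal = rng.normal(size=dim)   # extrinsic objective, only observed in matches
theta = np.zeros(dim)         # agent parameters
eta = np.zeros(dim)           # intrinsic practice-reward parameters

alpha = 0.5                   # practice (inner) step size
beta = 0.1                    # meta (outer) step size for eta

for step in range(200):
    # Practice period: ascend the intrinsic reward, i.e. move theta toward eta.
    theta_practiced = theta + alpha * (eta - theta)

    # Match period: observe the extrinsic loss of the practiced parameters.
    match_loss = 0.5 * np.sum((theta_practiced - goal) ** 2)

    # Meta-gradient: differentiate the match loss w.r.t. eta *through* the
    # practice update. Here d(theta_practiced)/d(eta) = alpha * I, so
    # dL/d(eta) = alpha * (theta_practiced - goal).
    grad_eta = alpha * (theta_practiced - goal)
    eta = eta - beta * grad_eta

    # The agent carries its practiced parameters into the next round.
    theta = theta_practiced

    if step % 50 == 0:
        print(f"step {step:3d}  match_loss = {match_loss:.4f}")

# The learned intrinsic reward drives practice toward parameters that do well
# in matches: theta approaches goal even though practice never observes it.
print("final distance to goal:", np.linalg.norm(theta - goal))
```

In the paper's actual setting the inner update is a reinforcement-learning update in a practice environment (e.g., Pong against a wall, or a ghost-free PacMan maze) and the outer loss comes from extrinsic match returns, but the differentiate-through-the-inner-update pattern is the same.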

Published

2020-04-03

How to Cite

Rajendran, J., Lewis, R., Veeriah, V., Lee, H., & Singh, S. (2020). How Should an Agent Practice?. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5454-5461. https://doi.org/10.1609/aaai.v34i04.5995

Section

AAAI Technical Track: Machine Learning