Adversarial Goal Generation for Intrinsic Motivation

Authors

  • Ishan Durugkar University of Texas at Austin
  • Peter Stone University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v32i1.12195

Keywords:

Reinforcement Learning, Deep Learning, Curriculum Learning, Intrinsic Motivation

Abstract

In reinforcement learning, the goal, or reward signal, is generally given by the environment and cannot be controlled by the agent. We propose an intrinsic motivation module that selects a reward function for the agent to learn to achieve. A Universal Value Function Approximator, which takes as input both the state and the parameters of this reward function (the goal), predicts the value function (or action-value function) and thereby generalizes across goals. The module is trained to generate goals that maximize the agent's learning; as such, it is also a method for automatic curriculum learning.
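The abstract's core idea, a goal-conditioned value function paired with a goal generator trained to maximize the agent's learning, can be sketched in miniature. The following is not the authors' implementation: it substitutes a tabular value table for the UVFA, a 1-D chain environment, and a simple "pick the goal with the greatest recent change in value estimates" rule as a stand-in proxy for the adversarially trained goal generator. All names (`rollout`, `select_goal`, the chain setup) are illustrative assumptions.

```python
import random

N_STATES = 10   # positions on a 1-D chain
ALPHA = 0.5     # TD learning rate
GAMMA = 0.9     # discount factor

# One shared table over (state, goal) pairs, mirroring the
# state-plus-goal input of a Universal Value Function Approximator.
V = {(s, g): 0.0 for s in range(N_STATES) for g in range(N_STATES)}

# Per-goal learning-progress estimate, optimistically initialized so
# every goal gets tried at least once.
progress = {g: 1.0 for g in range(N_STATES)}

def rollout(goal, steps=20):
    """Random walk on the chain; reward 1 on reaching the goal.
    Returns the total magnitude of TD updates as a learning signal."""
    s = 0
    total_change = 0.0
    for _ in range(steps):
        s2 = max(0, min(N_STATES - 1, s + random.choice((-1, 1))))
        r = 1.0 if s2 == goal else 0.0
        target = r + GAMMA * V[(s2, goal)]
        delta = target - V[(s, goal)]
        V[(s, goal)] += ALPHA * delta
        total_change += abs(delta)
        s = s2
        if r:
            break
    return total_change

def select_goal():
    """Goal-generator stand-in: propose the goal on which value
    estimates have recently been changing the most."""
    return max(progress, key=progress.get)

random.seed(0)
for _ in range(200):
    g = select_goal()
    change = rollout(g)
    # Exponential moving average of per-goal learning progress.
    progress[g] = 0.9 * progress[g] + 0.1 * change
```

In the paper's framing the goal generator is itself a learned, adversarially trained module rather than this hand-coded progress heuristic, but the loop structure (generate a goal, learn toward it, feed the resulting learning signal back to the generator) is the same, and goals on which learning stalls naturally fall out of favor, yielding a curriculum.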

Published

2018-04-29

How to Cite

Durugkar, I., & Stone, P. (2018). Adversarial Goal Generation for Intrinsic Motivation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12195