Where to Add Actions in Human-in-the-Loop Reinforcement Learning

Authors

  • Travis Mandel, University of Washington
  • Yun-En Liu, Enlearn
  • Emma Brunskill, Carnegie Mellon University
  • Zoran Popović, University of Washington

DOI:

https://doi.org/10.1609/aaai.v31i1.10945

Keywords:

Human-in-the-Loop AI, MDPs, Exploration, Human-Aware AI

Abstract

In order for reinforcement learning systems to learn quickly in vast action spaces, such as the space of all possible pieces of text or the space of all images, leveraging human intuition and creativity is key. However, a human-designed action space is likely to be initially imperfect and limited; furthermore, humans may improve at creating useful actions with practice or new information. We therefore propose a framework in which a human adds actions to a reinforcement learning system over time to boost performance. In this setting, however, it is crucial to use human effort as efficiently as possible, and one significant danger is that humans waste effort adding actions at states that are not very important. To address this, we propose Expected Local Improvement (ELI), an automated method that selects the states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find that ELI demonstrates excellent empirical performance, even in settings where the synthetic "experts" are quite poor.
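
The abstract describes a query loop in which the learner repeatedly picks a state and asks a human for a new action there. The sketch below is a minimal toy illustration of that loop only, not the paper's ELI criterion: it assumes a small tabular setting, a simulated human expert, and a simple optimistic "room for improvement" score standing in for the actual Expected Local Improvement computation. All names (estimate_values, select_query_state, human_expert) are hypothetical.

```python
# Toy human-in-the-loop action-addition loop (illustrative only; the scoring
# rule below is a stand-in, not the paper's Expected Local Improvement).
import random

N_STATES = 5
GAMMA = 0.9
V_MAX = 1.0 / (1.0 - GAMMA)  # value upper bound with rewards in [0, 1]

# actions[s]: the human-designed action set at state s; in this toy model an
# action is just its per-step probability of yielding reward 1.
actions = {s: [0.2] for s in range(N_STATES)}

def estimate_values(n_rollouts=200, horizon=50):
    """Monte Carlo estimate of the value of the best known action per state."""
    values = {}
    for s in range(N_STATES):
        best = max(actions[s])
        returns = [sum(GAMMA ** t * (random.random() < best)
                       for t in range(horizon))
                   for _ in range(n_rollouts)]
        values[s] = sum(returns) / n_rollouts
    return values

def select_query_state(values):
    """Query the state with the largest optimistic room for improvement
    (a simple stand-in for an ELI-style selection criterion)."""
    return max(range(N_STATES), key=lambda s: V_MAX - values[s])

def human_expert(state):
    """Simulated human: proposes a new, somewhat better action at the state."""
    return min(1.0, max(actions[state]) + random.uniform(0.0, 0.3))

for round_ in range(3):
    values = estimate_values()
    s_query = select_query_state(values)
    new_action = human_expert(s_query)
    actions[s_query].append(new_action)
    print(f"round {round_}: queried state {s_query}, "
          f"received action with success prob {new_action:.2f}")
```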

Published

2017-02-13

How to Cite

Mandel, T., Liu, Y.-E., Brunskill, E., & Popović, Z. (2017). Where to Add Actions in Human-in-the-Loop Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10945