Programmatic Reward Design by Example

Authors

  • Weichao Zhou Boston University
  • Wenchao Li Boston University

DOI:

https://doi.org/10.1609/aaai.v36i8.20910

Keywords:

Machine Learning (ML)

Abstract

Reward design is a fundamental problem in reinforcement learning (RL). A misspecified or poorly designed reward can result in low sample efficiency and undesired behaviors. In this paper, we propose the idea of programmatic reward design, i.e., using programs to specify the reward functions in RL environments. Programs allow human engineers to express sub-goals and complex task scenarios in a structured and interpretable way. The challenge of programmatic reward design, however, is that while humans can provide the high-level structures, properly setting the low-level details, such as the right amount of reward for a specific sub-task, remains difficult. A major contribution of this paper is a probabilistic framework that can infer the best candidate programmatic reward function from expert demonstrations. Inspired by recent generative-adversarial approaches, our framework searches for the most likely programmatic reward function under which the optimally generated trajectories cannot be differentiated from the demonstrated trajectories. Experimental results show that programmatic reward functions learned using this framework can significantly outperform those learned using existing reward learning algorithms, and enable RL agents to achieve state-of-the-art performance on highly complex tasks.
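To make the abstract's central idea concrete, here is a minimal sketch of what a programmatic reward function might look like: the program's branching structure encodes the sub-goals in an interpretable way, while the low-level numeric constants are left as parameters. The task (a key-door navigation scenario), state layout, and all names below are hypothetical illustrations, not the paper's benchmarks or code.

```python
from dataclasses import dataclass

@dataclass
class RewardParams:
    """Low-level reward magnitudes. Under the paper's framework, these are
    the values that would be inferred from expert demonstrations; the
    defaults here are arbitrary placeholders."""
    key_bonus: float = 1.0       # reward for picking up the key
    door_bonus: float = 2.0      # reward for opening the door
    step_penalty: float = -0.01  # per-step cost to encourage short paths

def programmatic_reward(state: dict, next_state: dict,
                        params: RewardParams) -> float:
    """A programmatic reward for a hypothetical key-door task.

    The if-else structure expresses the sub-goal ordering (get the key,
    then open the door); only the constants in `params` need tuning.
    """
    reward = params.step_penalty
    if not state["has_key"] and next_state["has_key"]:
        reward += params.key_bonus   # sub-goal 1: acquire the key
    elif state["has_key"] and next_state["door_open"]:
        reward += params.door_bonus  # sub-goal 2: open the door
    return reward
```

In the framework described by the abstract, the engineer supplies the program structure, and the numeric parameters are set by searching for values under which trajectories that are optimal for the program cannot be distinguished from the demonstrated trajectories.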

Published

2022-06-28

How to Cite

Zhou, W., & Li, W. (2022). Programmatic Reward Design by Example. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9233-9241. https://doi.org/10.1609/aaai.v36i8.20910

Section

AAAI Technical Track on Machine Learning III