Imitation Learning with Demonstrations and Shaping Rewards

Authors

  • Kshitij Judah, Oregon State University
  • Alan Fern, Oregon State University
  • Prasad Tadepalli, Oregon State University
  • Robby Goetschalckx, Oregon State University

DOI:

https://doi.org/10.1609/aaai.v28i1.9024

Keywords:

Sequential Decision Making, Imitation Learning, Reinforcement Learning, Reward Shaping

Abstract

Imitation Learning (IL) is a popular approach for teaching behavior policies to agents by demonstrating the desired target policy. While the approach has led to many successes, IL often requires a large set of demonstrations to achieve robust learning, which can be expensive for the teacher. In this paper, we consider a novel approach to improve the learning efficiency of IL by providing a shaping reward function in addition to the usual demonstrations. Shaping rewards are numeric functions of states (and possibly actions) that are generally easy to specify and capture general principles of desired behavior, without necessarily completely specifying the behavior. Shaping rewards have been used extensively in reinforcement learning, but have seldom been considered for IL. Our main contribution is to propose an IL approach that learns from both shaping rewards and demonstrations. We demonstrate the effectiveness of the approach across several IL problems, even when the shaping reward is not fully consistent with the demonstrations.
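The paper's actual algorithm is not reproduced on this page. As an illustration only, the following minimal Python sketch shows one natural way such a combined objective could look: a demonstration log-likelihood gradient plus a REINFORCE-style gradient on the shaping reward, for a tabular softmax policy. The names `demos`, `shaping_reward`, and the mixing weight `LAMBDA` are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3
theta = np.zeros((N_STATES, N_ACTIONS))  # log-linear (tabular) policy parameters

def policy(state):
    """Softmax action distribution over actions in the given state."""
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Hypothetical inputs: expert demonstrations and an easy-to-specify shaping reward.
demos = [(0, 1), (1, 2), (2, 0)]          # (state, expert action) pairs
def shaping_reward(state, action):        # heuristic; need not match the demos exactly
    return 1.0 if action == (state + 1) % N_ACTIONS else 0.0

ALPHA, LAMBDA = 0.1, 0.5  # learning rate, weight on the shaping-reward term

for _ in range(200):
    grad = np.zeros_like(theta)
    # Imitation term: gradient of the log-likelihood of the demonstrated actions.
    for s, a in demos:
        p = policy(s)
        grad[s, a] += 1.0
        grad[s] -= p                      # d/dlogits log softmax = one_hot(a) - p
    # Shaping term: REINFORCE-style gradient using actions sampled from the policy.
    for s in range(N_STATES):
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        g = -p
        g[a] += 1.0
        grad[s] += LAMBDA * shaping_reward(s, a) * g
    theta += ALPHA * grad                 # ascend the combined objective
```

Because the shaping term is weighted by `LAMBDA`, this kind of objective can tolerate a shaping reward that is only partially consistent with the demonstrations, which is the setting the paper evaluates.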

Published

2014-06-21

How to Cite

Judah, K., Fern, A., Tadepalli, P., & Goetschalckx, R. (2014). Imitation Learning with Demonstrations and Shaping Rewards. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9024

Section

Main Track: Novel Machine Learning Algorithms