V-MIN: Efficient Reinforcement Learning through Demonstrations and Relaxed Reward Demands

Authors

  • David Martínez, Institut de Robòtica i Informàtica Industrial (CSIC-UPC)
  • Guillem Alenyà, Institut de Robòtica i Informàtica Industrial (CSIC-UPC)
  • Carme Torras, Institut de Robòtica i Informàtica Industrial (CSIC-UPC)

DOI:

https://doi.org/10.1609/aaai.v29i1.9596

Keywords:

Model-based Reinforcement Learning, Teacher Demonstrations, Relational Learning, Active Learning

Abstract

Reinforcement learning (RL) is a common paradigm for learning tasks in robotics. However, extensive exploration is usually required, making RL too slow for high-level tasks. We present V-MIN, an algorithm that integrates teacher demonstrations with RL to learn complex tasks faster. The algorithm combines active demonstration requests and autonomous exploration to find policies yielding rewards higher than a given threshold Vmin. This threshold sets the degree of quality with which the robot is expected to complete the task, allowing the user either to opt for very good policies that require many learning experiences, or to be more permissive and accept sub-optimal policies that are easier to learn. The threshold can also be increased online to force the system to improve its policies until the desired behavior is obtained. Furthermore, the algorithm generalizes previously learned knowledge, adapting well to changes. The performance of V-MIN has been validated through experimentation, including domains from the international planning competition. Our approach achieves the desired behavior where previous algorithms failed.
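The control loop described in the abstract can be sketched in toy form: the agent keeps learning only while its estimated policy value falls short of the threshold Vmin, choosing between requesting a teacher demonstration and exploring on its own. This is a minimal illustrative sketch, not the paper's actual algorithm; all names (`VMinAgent`, `request_demonstration`, the improvement amounts, and the demonstration-vs-exploration rule) are hypothetical.

```python
import random

class VMinAgent:
    """Toy sketch of a V-MIN-style learning loop (illustrative only)."""

    def __init__(self, v_min):
        self.v_min = v_min          # minimum acceptable expected reward
        self.policy_value = 0.0     # estimated value of the current policy
        self.demos_requested = 0

    def estimate_value(self):
        # Stand-in for evaluating the learned policy's expected reward.
        return self.policy_value

    def request_demonstration(self):
        # Ask the teacher for a demo; here faked as a fixed improvement.
        self.demos_requested += 1
        self.policy_value += 2.0

    def explore(self):
        # Autonomous exploration; faked as a small stochastic improvement.
        self.policy_value += random.uniform(0.1, 0.5)

    def step(self):
        """One iteration: learn only while the value is below V_min."""
        if self.estimate_value() >= self.v_min:
            return False            # desired quality reached, stop learning
        # Hypothetical rule: prefer a demonstration when far below the
        # threshold, otherwise explore autonomously.
        if self.v_min - self.estimate_value() > 1.0:
            self.request_demonstration()
        else:
            self.explore()
        return True

agent = VMinAgent(v_min=5.0)
while agent.step():
    pass
```

Raising `v_min` after this loop terminates and running it again would model the online threshold increase the abstract mentions: the agent resumes learning until the stricter quality demand is met.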

Published

2015-02-21

How to Cite

Martínez, D., Alenyà, G., & Torras, C. (2015). V-MIN: Efficient Reinforcement Learning through Demonstrations and Relaxed Reward Demands. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9596

Section

Main Track: Novel Machine Learning Algorithms