Generalizing Policy Advice with Gaussian Process Bandits for Dynamic Skill Improvement

Authors

  • Jared Glover, MIT
  • Charlotte Zhu, MIT

DOI:

https://doi.org/10.1609/aaai.v28i1.9059

Keywords:

Learning from Advice

Abstract

We present a ping-pong-playing robot that learns to improve its swings with human advice. Our method learns a reward function over the joint space of task and policy parameters T×P, so the robot can explore policy space more intelligently, trading off exploration against exploitation to maximize the total cumulative reward over time. Multimodal stochastic policies can also easily be learned with this approach when the reward function is multimodal in the policy parameters. We extend the recently developed Gaussian Process Bandit Optimization framework to include exploration-bias advice from human domain experts, using a novel algorithm called Exploration Bias with Directional Advice (EBDA).
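The abstract does not spell out the EBDA update, so the following is only a minimal sketch of the general idea it describes: a GP-UCB-style bandit over policy parameters whose acquisition function is nudged in a direction suggested by a human. The function names, the additive advice bias term, and the `advice_weight` parameter are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Candidate policy parameters for a fixed task instance (1-D for illustration).
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

def ucb_with_advice(gp, X_obs, y_obs, advice_direction=None,
                    beta=2.0, advice_weight=0.5):
    """GP-UCB acquisition with an additive bias toward an advised direction.

    advice_direction: unit vector in policy space (a hypothetical encoding of
    directional advice such as "tilt the paddle up more"); this additive bias
    is an illustrative stand-in for EBDA, not the published update rule.
    """
    mu, sigma = gp.predict(candidates, return_std=True)
    acq = mu + beta * sigma  # standard upper confidence bound
    if advice_direction is not None:
        # Reward candidates that lie in the advised direction from the best
        # policy observed so far, biasing exploration toward the advice.
        x_best = X_obs[np.argmax(y_obs)]
        alignment = (candidates - x_best) @ advice_direction
        acq = acq + advice_weight * np.maximum(alignment, 0.0).ravel()
    return candidates[np.argmax(acq)]

# Toy loop: hidden reward, a few observed swings, one piece of advice ("go up").
true_reward = lambda x: np.exp(-(x - 0.7) ** 2 / 0.02).ravel()
X_obs = np.array([[0.1], [0.3], [0.5]])
y_obs = true_reward(X_obs)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(X_obs, y_obs)
next_policy = ucb_with_advice(gp, X_obs, y_obs, advice_direction=np.array([1.0]))
print("next policy parameter to try:", next_policy)
```

In this toy setting the advice term simply raises the acquisition value of candidates beyond the current best in the advised direction, so the next trial moves toward the region the human pointed to while the GP posterior still governs the exploration-exploitation balance.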

Published

2014-06-21

How to Cite

Glover, J., & Zhu, C. (2014). Generalizing Policy Advice with Gaussian Process Bandits for Dynamic Skill Improvement. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9059