Extending Policy Shaping to Continuous State Spaces (Student Abstract)

Authors

  • Thomas Wei The University of Texas at Austin
  • Taylor A. Kessler Faulkner The University of Texas at Austin
  • Andrea L. Thomaz The University of Texas at Austin

DOI

https://doi.org/10.1609/aaai.v35i18.17956

Keywords

Reinforcement Learning, Human Robot Interaction, Machine Learning

Abstract

Policy Shaping is a Human-in-the-loop Reinforcement Learning (HRL) algorithm. We extend this work to continuous state spaces with our algorithm, Deep Policy Shaping (DPS). DPS combines an RL algorithm with a feedback neural network that learns the optimality of actions from noisy human feedback. In simulation, we find that DPS outperforms or matches baselines when averaged over multiple hyperparameter settings and varying levels of feedback correctness.
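The core idea behind Policy Shaping is to treat human feedback as evidence about which action is optimal and to combine that evidence multiplicatively with the RL policy's action distribution. The sketch below illustrates this combination step in the original tabular setting (following Griffith et al.'s formulation with a feedback-consistency parameter C); it is a hypothetical illustration, not the paper's DPS implementation, which replaces the tabular count-based estimate with a neural network over continuous states.

```python
import math

def feedback_prob(delta, C=0.8):
    # Estimated probability that an action is optimal, given the net
    # feedback count delta (#positive - #negative signals) and the
    # assumed feedback consistency C. Tabular Policy Shaping estimate;
    # DPS instead learns this quantity with a neural network so it
    # generalizes across continuous states. C=0.8 is an illustrative value.
    return C**delta / (C**delta + (1 - C)**delta)

def shaped_policy(pi, deltas, C=0.8):
    # Combine the RL policy pi(a|s) with the feedback-derived optimality
    # probabilities by elementwise product, then renormalize so the
    # result is again a distribution over actions.
    fb = [feedback_prob(d, C) for d in deltas]
    unnorm = [p * f for p, f in zip(pi, fb)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Example: two actions with a uniform policy; the human has given
# net +2 feedback on action 0 and net -2 on action 1.
shaped = shaped_policy([0.5, 0.5], [2, -2])
```

With consistent positive feedback on an action, its shaped probability rises above the unshaped policy's, while noisy (low-C) feedback shifts the distribution less, which is the robustness property the abstract evaluates under varying feedback correctness.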

Published

2021-05-18

How to Cite

Wei, T., Faulkner, T. A. K., & Thomaz, A. L. (2021). Extending Policy Shaping to Continuous State Spaces (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15919-15920. https://doi.org/10.1609/aaai.v35i18.17956

Section

AAAI Student Abstract and Poster Program