Providing Uncertainty-Based Advice for Deep Reinforcement Learning Agents (Student Abstract)

Authors

  • Felipe Leno Da Silva University of São Paulo
  • Pablo Hernandez-Leal Borealis AI
  • Bilal Kartal Borealis AI
  • Matthew E. Taylor Borealis AI

DOI:

https://doi.org/10.1609/aaai.v34i10.7229

Abstract

The sample complexity of Reinforcement Learning (RL) techniques still represents a challenge for scaling up RL to unsolved domains. One way to alleviate this problem is to leverage samples from a demonstrator's policy to learn faster. However, advice is normally limited, so it should ideally be directed to states where the agent is uncertain about which action to apply. In this work, we propose Requesting Confidence-Moderated Policy advice (RCMP), an action-advising framework in which the agent asks for advice when its uncertainty is high. We describe a technique to estimate the agent's uncertainty with minor modifications to standard value-based RL methods. RCMP is shown to perform better than several baselines in the Atari Pong domain.
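The full paper details RCMP's uncertainty estimate; as one illustrative sketch (not the authors' implementation), the epistemic uncertainty of a value-based agent can be approximated by giving the value network several output heads and measuring their disagreement. The function name, array shapes, and threshold below are hypothetical.

```python
import numpy as np

def should_request_advice(head_q_values, threshold):
    """Decide whether to ask the demonstrator for advice.

    head_q_values: array of shape (num_heads, num_actions), holding each
    head's Q-value estimates for the current state.

    Uncertainty is taken as the variance across heads, averaged over
    actions; advice is requested when it exceeds the threshold.
    """
    variance_per_action = np.var(head_q_values, axis=0)  # disagreement per action
    uncertainty = variance_per_action.mean()
    return uncertainty > threshold

# Heads that agree -> low uncertainty: act on the agent's own policy.
agreeing = np.array([[1.0, 2.0],
                     [1.0, 2.0],
                     [1.0, 2.0]])

# Heads that disagree -> high uncertainty: request advice.
disagreeing = np.array([[1.0, 2.0],
                        [5.0, -1.0],
                        [-3.0, 4.0]])
```

Under this sketch, `should_request_advice(agreeing, 0.1)` is false while `should_request_advice(disagreeing, 0.1)` is true, so limited advice is spent only on states where the value estimates conflict.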


Published

2020-04-03

How to Cite

Silva, F. L. D., Hernandez-Leal, P., Kartal, B., & Taylor, M. E. (2020). Providing Uncertainty-Based Advice for Deep Reinforcement Learning Agents (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13913-13914. https://doi.org/10.1609/aaai.v34i10.7229

Section

Student Abstract Track