Reinforcement Learning with Trajectory Feedback

Authors

  • Yonathan Efroni (Microsoft Research, New York; Technion, Israel Institute of Technology)
  • Nadav Merlis (Technion, Israel Institute of Technology)
  • Shie Mannor (Technion, Israel Institute of Technology; Nvidia Research, Israel)

DOI:

https://doi.org/10.1609/aaai.v35i8.16895

Keywords:

Reinforcement Learning

Abstract

The standard feedback model of reinforcement learning requires revealing the reward of every visited state-action pair. However, in practice, it is often the case that such frequent feedback is not available. In this work, we take a first step towards relaxing this assumption and require a weaker form of feedback, which we refer to as trajectory feedback. Instead of observing the reward obtained after every action, we assume we only receive a score that represents the quality of the whole trajectory observed by the agent, namely, the sum of all rewards obtained over this trajectory. We extend reinforcement learning algorithms to this setting, based on least-squares estimation of the unknown reward, for both the known and unknown transition model cases, and study the performance of these algorithms by analyzing their regret. For cases where the transition model is unknown, we offer a hybrid optimistic-Thompson Sampling approach that results in a tractable algorithm.
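To illustrate the core estimation idea (not the authors' algorithm, which also handles exploration and regret guarantees), the following minimal sketch shows how per-state-action rewards can be recovered by ridge-regularized least squares when the only feedback is the trajectory-level return, i.e., the sum of rewards along each trajectory. All names, sizes, and the noise model below are hypothetical choices for the illustration.

```python
import numpy as np

# Hypothetical problem sizes: S states, A actions, N logged trajectories of length H.
S, A, N, H = 5, 3, 200, 10
rng = np.random.default_rng(0)

# Unknown reward vector, one entry per (state, action) pair.
true_reward = rng.uniform(0.0, 1.0, size=S * A)

# Each trajectory is summarized by its visitation-count vector x in R^{S*A};
# the only observed feedback is the scalar return y = <x, true_reward> (plus noise).
X = np.zeros((N, S * A))
y = np.zeros(N)
for n in range(N):
    for _ in range(H):
        s, a = rng.integers(S), rng.integers(A)
        X[n, s * A + a] += 1.0
    y[n] = X[n] @ true_reward + rng.normal(0.0, 0.1)

# Ridge-regularized least-squares estimate of the reward from trajectory feedback.
lam = 1.0
reward_hat = np.linalg.solve(X.T @ X + lam * np.eye(S * A), X.T @ y)

print("max abs estimation error:", np.abs(reward_hat - true_reward).max())
```

The regularized design matrix X.T @ X + lam * I also plays the role of the confidence-set geometry in optimistic or Thompson-Sampling-style planning over the estimated reward; that part is omitted here.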

Published

2021-05-18

How to Cite

Efroni, Y., Merlis, N., & Mannor, S. (2021). Reinforcement Learning with Trajectory Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7288-7295. https://doi.org/10.1609/aaai.v35i8.16895

Issue

Vol. 35 No. 8 (2021)

Section

AAAI Technical Track on Machine Learning I