Model-Free Preference-Based Reinforcement Learning

Authors

  • Christian Wirth, Technische Universität Darmstadt
  • Johannes Fürnkranz, Technische Universität Darmstadt
  • Gerhard Neumann, Technische Universität Darmstadt

DOI:

https://doi.org/10.1609/aaai.v30i1.10269

Keywords:

Reinforcement Learning, Preferences, Model-Free, Relative Entropy, Bayesian

Abstract

Specifying a numeric reward function for reinforcement learning typically requires substantial hand-tuning by a human expert. In contrast, preference-based reinforcement learning (PBRL) uses only pairwise comparisons between trajectories as a feedback signal, which are often more intuitive to specify. Currently available PBRL approaches for control problems with continuous state/action spaces require a known or estimated model, which is often unavailable and hard to learn. In this paper, we integrate preference-based estimation of the reward function into a model-free reinforcement learning (RL) algorithm, resulting in a model-free PBRL algorithm. Our new algorithm is based on Relative Entropy Policy Search (REPS), which allows us to use stochastic policies and to directly control the greediness of the policy update. REPS reduces the policy's exploration gradually by bounding the relative entropy of the policy update, which ensures that the algorithm is supplied with a diverse set of trajectories, and consequently with informative preferences. The preference-based estimation is computed with a sample-based Bayesian method, which can also estimate the uncertainty of the utility. We additionally compare to a linearly solvable approximation based on inverse RL. We show that both approaches compare favourably with the current state of the art. The overall result is an algorithm that can learn non-parametric continuous-action policies from a small number of preferences.
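The pipeline described in the abstract can be illustrated with a minimal sketch: trajectory utilities are assumed linear in trajectory features, a sample-based posterior over the utility weights is obtained from pairwise preferences under a Bradley-Terry-style likelihood, and trajectories are then reweighted REPS-style with an exponential of their estimated utility, where a temperature parameter controls the greediness of the update. This is not the authors' implementation; all names (phi features, eta, step sizes) and the Metropolis sampler are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: linear utility U(tau) = w . phi(tau),
# a sample-based Bayesian posterior over w from pairwise preferences,
# and a REPS-style exponential reweighting of trajectories.
rng = np.random.default_rng(0)

def preference_log_likelihood(w, prefs, features):
    """Bradley-Terry-style likelihood: P(tau_i > tau_j) = sigmoid(U_i - U_j)."""
    utils = features @ w
    ll = 0.0
    for i, j in prefs:  # (i, j) means trajectory i was preferred over trajectory j
        ll += -np.log1p(np.exp(-(utils[i] - utils[j])))
    return ll

def sample_posterior(prefs, features, n_samples=2000, step=0.1):
    """Random-walk Metropolis over utility weights w with a standard Gaussian prior."""
    d = features.shape[1]
    w = np.zeros(d)
    log_p = preference_log_likelihood(w, prefs, features) - 0.5 * w @ w
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.standard_normal(d)
        log_p_new = preference_log_likelihood(w_new, prefs, features) - 0.5 * w_new @ w_new
        if np.log(rng.random()) < log_p_new - log_p:  # Metropolis accept/reject
            w, log_p = w_new, log_p_new
        samples.append(w.copy())
    return np.array(samples)

def reps_weights(features, w_mean, eta=1.0):
    """REPS-style trajectory weights: softmax of estimated utility; eta sets greediness."""
    utils = features @ w_mean
    z = (utils - utils.max()) / eta
    weights = np.exp(z)
    return weights / weights.sum()

# Toy example: 5 trajectories with 3-dimensional features and 4 preferences.
features = rng.standard_normal((5, 3))
prefs = [(0, 1), (0, 2), (3, 4), (3, 2)]

posterior = sample_posterior(prefs, features)
w_mean, w_std = posterior.mean(axis=0), posterior.std(axis=0)  # utility estimate + uncertainty
weights = reps_weights(features, w_mean, eta=0.5)
print("posterior mean utility weights:", w_mean)
print("REPS trajectory weights:", weights)
```

The posterior samples also give an uncertainty estimate of the utility (here via the per-dimension standard deviation), and a larger eta yields a less greedy, more exploratory policy update, mirroring the role of the relative-entropy bound described above.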

Published

2016-03-02

How to Cite

Wirth, C., Fürnkranz, J., & Neumann, G. (2016). Model-Free Preference-Based Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10269

Section

Technical Papers: Machine Learning Methods