Policy Evaluation with Temporal Differences: A Survey and Comparison (Extended Abstract)

Authors

  • Christoph Dann, Carnegie Mellon University
  • Gerhard Neumann, Technische Universität Darmstadt
  • Jan Peters, Max Planck Institute for Intelligent Systems

DOI:

https://doi.org/10.1609/icaps.v25i1.13686

Abstract

Value functions are an essential tool for solving sequential decision-making problems such as Markov decision processes (MDPs). Computing the value function for a given policy (policy evaluation) is not only important for determining the quality of the policy but also a key step in prominent policy-iteration-type algorithms. In common settings where a model of the Markov decision process is not available or is too complex to handle directly, an approximation of the value function is usually estimated from samples of the process. Linearly parameterized estimates are often preferred due to their simplicity and strong stability guarantees. Since the late 1980s, research on policy evaluation in these scenarios has been dominated by temporal-difference (TD) methods because of their data efficiency. However, several core issues have only been tackled recently, including stability guarantees for off-policy estimation, where the samples are not generated by the policy being evaluated. Together with improvements in sample efficiency and probabilistic treatments of uncertainty in the value estimates, these efforts have led to numerous new temporal-difference algorithms. These methods are scattered across the literature and are usually compared only to the most similar approaches. This article therefore presents the state of the art of policy evaluation with temporal differences and linearly parameterized value functions in discounted MDPs, together with a more comprehensive comparison of these approaches. We put the algorithms in a unified framework of function optimization, with a focus on surrogate cost functions and optimization strategies, to identify similarities and differences between the methods. In addition, we discuss important extensions of the base methods, such as off-policy estimation, eligibility traces for a better bias-variance trade-off, and regularization in high-dimensional feature spaces.
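To make the setting concrete, below is a minimal sketch of on-policy TD(0) with a linearly parameterized value function, the base method from which the surveyed algorithms depart. The chain MDP, feature map, step size, and sample format here are illustrative assumptions, not taken from the article:

    import numpy as np

    def td0_policy_evaluation(phi, transitions, gamma=0.9, alpha=0.05, num_sweeps=200):
        """TD(0) with a linear value estimate V(s) = phi(s) @ w.

        `transitions` is a list of (s, r, s_next) samples generated by the
        policy under evaluation (on-policy setting).
        """
        w = np.zeros(phi(0).shape[0])
        for _ in range(num_sweeps):
            for s, r, s_next in transitions:
                # Temporal-difference error: one-step bootstrapped target
                # minus the current estimate.
                delta = r + gamma * phi(s_next) @ w - phi(s) @ w
                # Semi-gradient update along the features of the visited state.
                w += alpha * delta * phi(s)
        return w

    # Illustrative 3-state cycle with tabular (one-hot) features; reward 1
    # is received on the transition back to state 0.
    n_states = 3
    phi = lambda s: np.eye(n_states)[s]
    transitions = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 0)]

    w = td0_policy_evaluation(phi, transitions)
    print("Estimated values:", w)

The extensions discussed in the article, such as off-policy corrections, eligibility traces, and regularization, can be read as modifications of this basic semi-gradient update or of the surrogate cost it implicitly optimizes.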

Published

2015-04-08

How to Cite

Dann, C., Neumann, G., & Peters, J. (2015). Policy Evaluation with Temporal Differences: A Survey and Comparison (Extended Abstract). Proceedings of the International Conference on Automated Planning and Scheduling, 25(1), 359-360. https://doi.org/10.1609/icaps.v25i1.13686