Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach

Authors

  • Silviu Pitis, University of Toronto

DOI:

https://doi.org/10.1609/aaai.v33i01.33017949

Abstract

Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuing settings, with fixed discount factor γ < 1, or in episodic settings, with γ = 1. While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent “discount” factor that is not constrained to be less than 1, so long as there is eventual long run discounting. Although this broadens the range of possible preference structures in continuing settings, we show that there exists a unique “optimizing MDP” with fixed γ < 1 whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White’s RL task formalism (2017) and other recent departures from traditional RL, and is relevant to task specification in RL, inverse RL and preference-based RL.
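The abstract contrasts a fixed discount factor γ with a state-action dependent “discount.” The sketch below is not from the paper; it is a minimal illustration, under assumptions of our own, of how a transition-dependent discount Gamma[s, a, s'] slots into a standard Bellman backup, with fixed-γ discounting recovered as the special case where every entry equals γ. The toy MDP, all variable names, and the rescaling step that keeps the expected per-step discount below 1 (so value iteration contracts) are illustrative choices, not the paper's construction.

```python
# Illustrative sketch (not from the paper): value iteration on a random toy MDP
# with a transition-dependent discount Gamma[s, a, s'], in the spirit of the
# state-action dependent discounting described in the abstract.
import numpy as np

n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

# P[s, a, s']: transition probabilities; R[s, a, s']: rewards.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions, n_states))

# Transition-dependent "discount": individual entries need not be below 1.
# We rescale so the largest expected per-step discount max_{s,a} E[Gamma] is
# 0.95, which makes the Bellman operator a sup-norm contraction (an assumption
# made here for a simple demo, standing in for "eventual long run discounting").
Gamma = 0.8 + 0.3 * rng.random((n_states, n_actions, n_states))
Gamma *= 0.95 / np.einsum('sat,sat->sa', P, Gamma).max()

def value_iteration(P, R, Gamma, tol=1e-8):
    """Bellman backups V(s) = max_a E[R + Gamma * V(s')] with per-transition discount."""
    V = np.zeros(P.shape[0])
    while True:
        Q = np.einsum('sat,sat->sa', P, R + Gamma * V)  # V broadcasts over s'
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, Gamma)
print("V:", V, "greedy policy:", policy)

# Fixed discounting is the special case Gamma[s, a, s'] == gamma for all transitions:
V_fixed, _ = value_iteration(P, R, np.full_like(Gamma, 0.9))
```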

Published

2019-07-17

How to Cite

Pitis, S. (2019). Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7949-7956. https://doi.org/10.1609/aaai.v33i01.33017949

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Reasoning under Uncertainty