TY - JOUR
AU - Pitis, Silviu
PY - 2019/07/17
Y2 - 2023/01/28
TI - Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 33
IS - 01
SE - AAAI Technical Track: Reasoning under Uncertainty
DO - 10.1609/aaai.v33i01.33017949
UR - https://ojs.aaai.org/index.php/AAAI/article/view/4795
SP - 7949-7956
AB  - <p>Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuous settings, with fixed discount factor <em>γ <</em> 1, or in episodic settings, with <em>γ</em> = 1. While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent “discount” factor that is not constrained to be less than 1, so long as there is eventual long run discounting. Although this broadens the range of possible preference structures in continuous settings, we show that there exists a unique “optimizing MDP” with fixed <em>γ <</em> 1 whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White’s RL task formalism (2017) and other recent departures from traditional RL, and is relevant to task specification in RL, inverse RL and preference-based RL.</p>
ER -