The Choice Function Framework for Online Policy Improvement


  • Murugeswari Issakkimuthu, Oregon State University
  • Alan Fern, Oregon State University
  • Prasad Tadepalli, Oregon State University



There are notable examples of online search improving over hand-coded or learned policies (e.g. AlphaZero) for sequential decision making. It is not clear, however, whether policy improvement is guaranteed for many of these approaches, even when given a perfect leaf evaluation function and transition model. Indeed, simple counterexamples show that seemingly reasonable online search procedures can hurt performance compared to the original policy. To address this issue, we introduce the choice function framework for analyzing online search procedures for policy improvement. A choice function specifies the actions to be considered at every node of a search tree, with all other actions being pruned. Our main contribution is to give sufficient conditions for stationary and non-stationary choice functions to guarantee that the value achieved by online search is no worse than that of the original policy. In addition, we describe a general parametric class of choice functions that satisfy those conditions and present an illustrative use case demonstrating the empirical utility of the framework.
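To make the abstract's central mechanism concrete, the following is a minimal sketch (not the paper's implementation) of a choice-function-guided search on a toy deterministic MDP. All names here (`choice_function`, `choice_search`, the toy transition table) are illustrative assumptions. The choice function always includes the base policy's own action at every node, which is the flavor of condition the paper's sufficient conditions formalize: when the leaf evaluation is the base policy's rollout value, maximizing over a set that contains the policy action cannot do worse than the policy itself.

```python
GAMMA = 0.9

# Toy deterministic MDP: TRANSITIONS[(state, action)] = (next_state, reward).
# State 1 is a zero-reward trap; state 2 yields reward 1 forever.
TRANSITIONS = {
    (0, 'a'): (1, 0.0), (0, 'b'): (2, 1.0),
    (1, 'a'): (1, 0.0), (1, 'b'): (1, 0.0),
    (2, 'a'): (2, 1.0), (2, 'b'): (2, 1.0),
}
ACTIONS = ['a', 'b']

def base_policy(state):
    """A deliberately suboptimal hand-coded policy: always pick 'a'."""
    return 'a'

def rollout_value(state, horizon=20):
    """Leaf evaluation: discounted return of following the base policy."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = TRANSITIONS[(state, base_policy(state))]
        total += discount * reward
        discount *= GAMMA
    return total

def choice_function(state, depth):
    """Actions to consider at this node; everything else is pruned.
    Crucially, the base policy's action is always included."""
    return sorted({base_policy(state), *ACTIONS})  # here: keep all actions

def choice_search(state, depth):
    """Depth-limited online search expanding only the chosen actions."""
    if depth == 0:
        return rollout_value(state)
    best = float('-inf')
    for action in choice_function(state, depth):
        nxt, reward = TRANSITIONS[(state, action)]
        best = max(best, reward + GAMMA * choice_search(nxt, depth - 1))
    return best
```

On this toy problem the base policy from state 0 walks into the trap (value 0), while the search discovers action 'b' and achieves a strictly positive value; because the policy action is never pruned and leaves are scored by the policy's own value, the search value is guaranteed to be at least the policy's value at every state.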




How to Cite

Issakkimuthu, M., Fern, A., & Tadepalli, P. (2020). The Choice Function Framework for Online Policy Improvement. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10178-10185.



AAAI Technical Track: Reasoning under Uncertainty