PAC Optimal Planning for Invasive Species Management: Improved Exploration for Reinforcement Learning from Simulator-Defined MDPs

Authors

  • Thomas Dietterich, Oregon State University
  • Majid Alkaee Taleghan, Oregon State University
  • Mark Crowley, Oregon State University

DOI:

https://doi.org/10.1609/aaai.v27i1.8487

Keywords:

MDP Planning, Reinforcement Learning, Simulator-Defined MDPs

Abstract

Often the most practical way to define a Markov Decision Process (MDP) is as a simulator that, given a state and an action, produces a resulting state and immediate reward sampled from the corresponding distributions. Simulators in natural resource management can be very expensive to execute, so the time required to solve such MDPs is dominated by the number of calls to the simulator. This paper presents an algorithm, DDV, that combines improved confidence intervals on the Q values (as in interval estimation) with a novel upper bound on the discounted state occupancy probabilities to intelligently choose state-action pairs to explore. We prove that this algorithm terminates with a policy whose value is within epsilon of the value of the optimal policy (with probability 1-delta) after making only polynomially many calls to the simulator. Experiments on one benchmark MDP and on an MDP for invasive species management show very large reductions in the number of simulator calls required.
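To make the exploration idea in the abstract concrete, the following is a minimal, hedged sketch of confidence-interval-guided exploration in the same spirit: maintain upper and lower bounds on Q values, compute a proxy for how much each state can influence the value at the start state via its discounted occupancy, and spend each expensive simulator call on the state-action pair where (occupancy proxy) x (Q-interval width) is largest. The toy MDP, the Hoeffding-style slack term, the greedy-optimistic occupancy proxy, and all constants and function names are illustrative assumptions, not the paper's exact confidence intervals, occupancy upper bound, or stopping rule.

```python
# Illustrative sketch only: count-based model estimates, placeholder confidence
# widths, and a greedy-optimistic occupancy proxy stand in for the paper's DDV
# algorithm. Not the published bounds.
import numpy as np

S, A, gamma, Vmax = 5, 2, 0.9, 10.0   # toy sizes; Vmax = Rmax / (1 - gamma) with Rmax = 1
rng = np.random.default_rng(0)
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # hidden dynamics (the "simulator")
R_true = rng.uniform(0, 1, size=(S, A))

def simulate(s, a):
    """One expensive simulator call: sample a next state and reward."""
    return rng.choice(S, p=P_true[s, a]), R_true[s, a]

counts = np.zeros((S, A, S))
reward_sum = np.zeros((S, A))

def q_bounds():
    """Optimistic/pessimistic value iteration with a crude count-based slack."""
    n = counts.sum(axis=2)
    P_hat = np.where(n[:, :, None] > 0, counts / np.maximum(n, 1)[:, :, None], 1.0 / S)
    R_hat = reward_sum / np.maximum(n, 1)
    slack = Vmax * np.sqrt(1.0 / np.maximum(n, 1))   # placeholder confidence width
    Qu = np.full((S, A), Vmax)
    Ql = np.zeros((S, A))
    for _ in range(200):
        Vu, Vl = Qu.max(axis=1), Ql.max(axis=1)
        Qu = np.minimum(Vmax, R_hat + slack + gamma * P_hat @ Vu)
        Ql = np.maximum(0.0,  R_hat - slack + gamma * P_hat @ Vl)
    return Ql, Qu, P_hat

def occupancy_proxy(P_hat, policy, s0=0):
    """Discounted state occupancy of the greedy optimistic policy from the start
    state, used here as a rough proxy for how much each state matters to V(s0)."""
    occ = np.zeros(S)
    occ[s0] = 1.0
    total = np.zeros(S)
    for _ in range(200):
        total += occ
        occ = gamma * np.array([sum(occ[s] * P_hat[s, policy[s], s2] for s in range(S))
                                for s2 in range(S)])
    return total

for _ in range(500):                    # budget of simulator calls
    Ql, Qu, P_hat = q_bounds()
    if (Qu - Ql).max() < 0.5:           # stop once all Q intervals are narrow
        break
    greedy = Qu.argmax(axis=1)
    occ = occupancy_proxy(P_hat, greedy)
    score = occ[:, None] * (Qu - Ql)    # prioritize reachable, uncertain pairs
    s, a = np.unravel_index(score.argmax(), score.shape)
    s2, r = simulate(s, a)              # the one expensive call per iteration
    counts[s, a, s2] += 1
    reward_sum[s, a] += r

print("simulator calls used:", int(counts.sum()))
print("greedy policy from optimistic Q:", Qu.argmax(axis=1))
```

The design point this sketch is meant to convey is that every simulator call is directed at the state-action pair expected to most reduce uncertainty about the start-state value, rather than at whatever pair is currently most uncertain in isolation.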

Published

2013-06-29

How to Cite

Dietterich, T., Alkaee Taleghan, M., & Crowley, M. (2013). PAC Optimal Planning for Invasive Species Management: Improved Exploration for Reinforcement Learning from Simulator-Defined MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 27(1), 1270-1276. https://doi.org/10.1609/aaai.v27i1.8487

Section

Computational Sustainability and Artificial Intelligence