Optimal and Efficient Stochastic Motion Planning in Partially-Known Environments

Authors

  • Ryan Luna, Rice University
  • Morteza Lahijanian, Rice University
  • Mark Moll, Rice University
  • Lydia Kavraki, Rice University

DOI

https://doi.org/10.1609/aaai.v28i1.9054

Abstract

A framework capable of computing optimal control policies for a continuous system in the presence of both action and environment uncertainty is presented in this work. The framework decomposes the planning problem into two stages: an offline phase that reasons only over action uncertainty and an online phase that quickly reacts to the uncertain environment. Offline, a bounded-parameter Markov decision process (BMDP) is employed to model the evolution of the stochastic system over a discretization of the environment. Online, an optimal control policy over the BMDP is computed. Upon the discovery of an unknown environment feature during policy execution, the BMDP is updated and the optimal control policy is efficiently recomputed. Depending on the desired quality of the control policy, a suite of methods is presented to incorporate new information into the BMDP with varying degrees of detail online. Experiments confirm that the framework recomputes high-quality policies in seconds and is orders of magnitude faster than existing methods.
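The abstract's core computational step, computing an optimal control policy over a BMDP, can be illustrated with interval value iteration: since each transition probability is known only to lie within an interval, the iteration resolves each interval pessimistically before taking the Bellman backup. The sketch below is a minimal, hypothetical illustration of this idea, not the paper's implementation; the toy three-region model, rewards, and discount factor are all assumptions for demonstration.

```python
# Toy BMDP: 3 regions of a discretized environment, with transition
# probabilities known only up to intervals [lo, hi] (all values hypothetical).
GAMMA = 0.95

# transitions[state][action] = {successor: (lo, hi)}
transitions = {
    0: {"right": {0: (0.1, 0.3), 1: (0.7, 0.9)}},
    1: {"right": {1: (0.1, 0.2), 2: (0.8, 0.9)},
        "left":  {0: (0.8, 0.9), 1: (0.1, 0.2)}},
    2: {"stay":  {2: (1.0, 1.0)}},  # goal region, absorbing
}
reward = {0: 0.0, 1: 0.0, 2: 1.0}

def pessimistic_expectation(interval_dist, values):
    """Worst-case expected value: push the free probability mass toward
    low-value successors while respecting each interval [lo, hi]."""
    succs = sorted(interval_dist, key=lambda s: values[s])   # ascending value
    probs = {s: interval_dist[s][0] for s in succs}          # start at lower bounds
    slack = 1.0 - sum(probs.values())                        # mass left to assign
    for s in succs:  # worst (lowest-value) successors absorb the slack first
        bump = min(interval_dist[s][1] - probs[s], slack)
        probs[s] += bump
        slack -= bump
    return sum(probs[s] * values[s] for s in succs)

def interval_value_iteration(eps=1e-6):
    """Iterate Bellman backups under the pessimistic resolution until the
    value function converges; returns a lower bound on achievable value."""
    V = {s: 0.0 for s in transitions}
    while True:
        V_new = {
            s: max(reward[s] + GAMMA * pessimistic_expectation(dist, V)
                   for dist in acts.values())
            for s, acts in transitions.items()
        }
        if max(abs(V_new[s] - V[s]) for s in V) < eps:
            return V_new
        V = V_new
```

When an unknown environment feature is discovered online, only the intervals of the affected regions change, so the iteration can be re-run warm-started from the previous values, which is one way the fast policy recomputation described in the abstract can be realized.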

Published

2014-06-21

How to Cite

Luna, R., Lahijanian, M., Moll, M., & Kavraki, L. (2014). Optimal and Efficient Stochastic Motion Planning in Partially-Known Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9054