Dynamically Constructed (PO)MDPs for Adaptive Robot Planning

Authors

  • Shiqi Zhang, Cleveland State University
  • Piyush Khandelwal, The University of Texas at Austin
  • Peter Stone, The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v31i1.11042

Abstract

To operate in human-robot coexisting environments, intelligent robots need to simultaneously reason with commonsense knowledge and plan under uncertainty. Markov decision processes (MDPs) and partially observable MDPs (POMDPs) are good at planning under uncertainty toward maximizing long-term rewards; P-log, a declarative programming language under Answer Set semantics, is strong in commonsense reasoning. In this paper, we present a novel algorithm called iCORPP that uses P-log to dynamically reason about, and construct, (PO)MDPs. iCORPP shields exogenous domain attributes from the (PO)MDPs, which limits their computational complexity while still enabling them to adapt when the values of these attributes change. We conduct experimental trials on two example problems in simulation and demonstrate iCORPP on a real robot. Results show significant improvements over competitive baselines.
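As a rough illustration of the idea stated in the abstract, the sketch below is a hypothetical example, not the authors' implementation: a commonsense reasoner (P-log plays this role in the paper) is assumed to have settled the values of exogenous attributes, and a small tabular MDP is then constructed over the endogenous variables only, with its transition parameters conditioned on those values. All domain names and probabilities are made up for illustration.

```python
"""Illustrative sketch only: hypothetical domain, names, and numbers."""
from itertools import product

# Exogenous attributes, assumed to be supplied by the reasoner.
exogenous = {"hallway_crowded": True, "door_open": False}

# Endogenous state variables; the model is built over these only,
# so it does not grow with the number of exogenous attributes.
locations = ["lab", "hallway", "office"]
states = list(product(locations, [False, True]))   # (location, has_item)
actions = ["move_to_lab", "move_to_hallway", "move_to_office", "pick_up"]


def transition_prob(state, action, next_state):
    """Transition model whose parameters depend on exogenous attributes."""
    loc, has_item = state
    next_loc, next_has_item = next_state
    if action.startswith("move_to_"):
        target = action[len("move_to_"):]
        if next_has_item != has_item:
            return 0.0
        if loc == target:
            return 1.0 if next_loc == loc else 0.0
        # A crowded hallway (exogenous) lowers navigation success.
        success = 0.6 if (target == "hallway"
                          and exogenous["hallway_crowded"]) else 0.9
        if next_loc == target:
            return success
        if next_loc == loc:
            return 1.0 - success
        return 0.0
    if action == "pick_up":
        return 1.0 if (next_loc == loc and next_has_item) else 0.0
    return 0.0

# The resulting tabular MDP (states, actions, transition_prob, plus a
# reward function) can be passed to any standard solver; when the
# reasoner reports a changed exogenous value, the model is rebuilt.
```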

Published

2017-02-12

How to Cite

Zhang, S., Khandelwal, P., & Stone, P. (2017). Dynamically Constructed (PO)MDPs for Adaptive Robot Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11042