Robust Finite-State Controllers for Uncertain POMDPs


  • Murat Cubuktepe The University of Texas at Austin
  • Nils Jansen Radboud University Nijmegen
  • Sebastian Junges University of California, Berkeley
  • Ahmadreza Marandi Eindhoven University of Technology
  • Marnix Suilen Radboud University Nijmegen
  • Ufuk Topcu The University of Texas at Austin



Planning under Uncertainty, Planning with Markov Models (MDPs, POMDPs)


Uncertain partially observable Markov decision processes (uPOMDPs) allow the probabilistic transition and observation functions of standard POMDPs to belong to a so-called uncertainty set. Such uncertainty, referred to as epistemic uncertainty, captures uncountable sets of probability distributions caused by, for instance, a lack of available data. We develop an algorithm to compute finite-memory policies for uPOMDPs that robustly satisfy specifications against any admissible distribution. In general, computing such policies is theoretically and practically intractable. We provide an efficient solution to this problem in four steps. (1) We state the underlying problem as a nonconvex optimization problem with infinitely many constraints. (2) A dedicated dualization scheme yields a dual problem that is still nonconvex but has finitely many constraints. (3) We linearize this dual problem, and (4) we solve the resulting finite linear program to obtain locally optimal solutions to the original problem. The resulting problem formulation is exponentially smaller than those produced by existing methods. We demonstrate the applicability of our algorithm on large instances of an aircraft collision-avoidance scenario and a novel spacecraft motion planning case study.
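The core idea behind steps (1) and (2) of the abstract, replacing a semi-infinite family of robust constraints with finitely many worst-case constraints, can be illustrated on a toy robust linear constraint. The sketch below is not the paper's algorithm; the function name and the interval (box) uncertainty set are illustrative assumptions chosen to keep the example self-contained.

```python
def robust_feasible(x, lower, upper, b):
    """Check a^T x <= b for ALL vectors a with lower[i] <= a[i] <= upper[i].

    This family contains infinitely many constraints (one per admissible a),
    analogous to the semi-infinite formulation in step (1). It collapses to a
    single finite constraint by taking the per-coordinate worst case: the term
    a[i]*x[i] is maximized at upper[i] if x[i] >= 0 and at lower[i] otherwise,
    mirroring in miniature the dualization idea of step (2).
    """
    worst = sum((u if xi >= 0 else l) * xi
                for xi, l, u in zip(x, lower, upper))
    return worst <= b

# x = (1, 1), a1 in [0.2, 0.5], a2 in [0.1, 0.4], budget b = 1.0:
# worst case 0.5 + 0.4 = 0.9 <= 1.0, so x is robustly feasible.
print(robust_feasible([1.0, 1.0], [0.2, 0.1], [0.5, 0.4], 1.0))  # True
# With a tighter budget b = 0.8, the worst case 0.9 violates the constraint.
print(robust_feasible([1.0, 1.0], [0.2, 0.1], [0.5, 0.4], 0.8))  # False
```

In the paper's setting the uncertainty sets live over transition and observation probabilities and the constraints are nonconvex, so the dual problem must additionally be linearized (step 3) before a finite linear program can be solved (step 4).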




How to Cite

Cubuktepe, M., Jansen, N., Junges, S., Marandi, A., Suilen, M., & Topcu, U. (2021). Robust Finite-State Controllers for Uncertain POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11792-11800.



AAAI Technical Track on Planning, Routing, and Scheduling