Optimizing Expectation with Guarantees in POMDPs

Authors

  • Krishnendu Chatterjee, Institute of Science and Technology Austria
  • Petr Novotný, Institute of Science and Technology Austria
  • Guillermo Pérez, Université Libre de Bruxelles
  • Jean-François Raskin, Université Libre de Bruxelles
  • Đorđe Žikelić, University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v31i1.11046

Keywords:

Partially-observable Markov decision processes, Discounted payoff, Probabilistic planning, Verification

Abstract

A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not optimize anything beyond satisfying that threshold constraint. In this work we go beyond both the “expectation” and “threshold” approaches and consider a “guaranteed payoff optimization (GPO)” problem for POMDPs: given a threshold t, find a policy σ such that a) each possible outcome of σ yields a discounted-sum payoff of at least t, and b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to the GPO problem and evaluate it on standard POMDP benchmarks.
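The two conditions of the GPO problem can be written as a constrained optimization. The notation below (plays ρ, per-step rewards r, discount factor γ) is a sketch introduced here for illustration, not taken verbatim from the paper:

```latex
% Discounted-sum payoff of a play \rho = s_0 a_0 s_1 a_1 \ldots,
% with per-step rewards r(s_i, a_i) and discount factor \gamma \in (0,1):
\mathrm{Disc}(\rho) \;=\; \sum_{i=0}^{\infty} \gamma^{i}\, r(s_i, a_i)

% Guaranteed payoff optimization (GPO) for a given threshold t:
% maximize the expectation over policies whose every outcome meets the threshold.
\sup_{\sigma} \; \mathbb{E}^{\sigma}\!\left[\mathrm{Disc}\right]
\quad \text{subject to} \quad
\mathrm{Disc}(\rho) \ge t \ \text{ for every play } \rho \text{ consistent with } \sigma .
```

Condition a) is the worst-case (sure) guarantee over all plays consistent with σ; condition b) is expectation optimality restricted to the policies that satisfy a).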

Published

2017-02-12

How to Cite

Chatterjee, K., Novotný, P., Pérez, G., Raskin, J.-F., & Žikelić, Đ. (2017). Optimizing Expectation with Guarantees in POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11046

Section

AAAI Technical Track: Reasoning under Uncertainty