Bounding the Probability of Resource Constraint Violations in Multi-Agent MDPs

Authors

  • Frits de Nijs, Delft University of Technology
  • Erwin Walraven, Delft University of Technology
  • Mathijs de Weerdt, Delft University of Technology
  • Matthijs Spaan, Delft University of Technology

DOI:

https://doi.org/10.1609/aaai.v31i1.11037

Keywords:

Markov Decision Process, Resource constraints, Planning under uncertainty

Abstract

Multi-agent planning problems with constraints on global resource consumption occur in several domains. Existing algorithms for solving Multi-agent Markov Decision Processes (MDPs) can compute policies that meet a resource constraint in expectation, but these policies provide no guarantees on the probability that a resource constraint violation will occur. We derive a method to bound constraint violation probabilities using Hoeffding's inequality. This method is applied to two existing approaches for computing policies satisfying constraints: the Constrained MDP framework and a Column Generation approach. We also introduce an algorithm to adaptively relax the bound up to a given maximum violation tolerance. Experiments on a hard toy problem show that the resulting policies outperform static optimal resource allocations at any chosen tolerance level. By testing the algorithms on more realistic planning domains from the literature, we demonstrate that the adaptive bound efficiently trades off violation probability against expected value, outperforming state-of-the-art planners.
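
As a rough illustration of the bounding step mentioned in the abstract (not the paper's exact derivation; the notation below is introduced only for this sketch): if the n agents' resource consumptions X_1, ..., X_n are independent and each X_i lies in [0, c_i], Hoeffding's inequality bounds the probability that total consumption exceeds a limit L whenever the planner keeps expected total consumption a margin t below L:

  \Pr\!\left(\sum_{i=1}^{n} X_i \ge L\right) \;\le\; \exp\!\left(\frac{-2t^2}{\sum_{i=1}^{n} c_i^2}\right), \qquad t = L - \mathbb{E}\!\left[\sum_{i=1}^{n} X_i\right] > 0.

Inverting the bound shows that tightening the expected-consumption constraint by t = \sqrt{\tfrac{1}{2}\ln(1/\alpha)\sum_{i} c_i^2} suffices to keep the violation probability below a chosen tolerance \alpha.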

Published

2017-02-12

How to Cite

de Nijs, F., Walraven, E., de Weerdt, M., & Spaan, M. (2017). Bounding the Probability of Resource Constraint Violations in Multi-Agent MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11037