Planning with Participation Constraints

Authors

  • Hanrui Zhang, Carnegie Mellon University
  • Yu Cheng, University of Illinois at Chicago
  • Vincent Conitzer, Duke University

DOI:

https://doi.org/10.1609/aaai.v36i5.20462

Keywords:

Game Theory And Economic Paradigms (GTEP)

Abstract

We pose and study the problem of planning in Markov decision processes (MDPs), subject to participation constraints as studied in mechanism design. In this problem, a planner must work with a self-interested agent on a given MDP. Each action in the MDP provides an immediate reward to the planner and a (possibly different) reward to the agent. The agent has no control over which actions are taken, but has the option to end the entire process at any time. The goal of the planner is to find a policy that maximizes her cumulative reward, taking into consideration the agent's ability to terminate. We give a fully polynomial-time approximation scheme for this problem. En route, we present polynomial-time algorithms for computing (exact) optimal policies for important special cases of this problem, including when the time horizon is constant, or when the MDP exhibits a "definitive decisions" property. We illustrate our algorithms with two different game-theoretic applications: the problem of assigning rides in ride-sharing and the problem of designing screening policies. Our results imply efficient algorithms for computing (approximately) optimal policies in both applications.
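To make the setting concrete, the sketch below is a simplified illustration (not the paper's algorithm): a finite-horizon backward induction restricted to deterministic Markov policies, where the agent's outside option is normalized to 0 and an action is feasible only if the agent's expected continuation utility under the chosen policy stays nonnegative. All names (`plan_with_participation`, `Rp`, `Ra`, `P`) are hypothetical; the paper's FPTAS handles the general problem, where the optimal policy may require more than this greedy, deterministic restriction.

```python
def plan_with_participation(states, actions, P, Rp, Ra, horizon):
    """Backward induction with a participation constraint (simplified sketch).

    P[s][a]  : dict mapping next state -> transition probability
    Rp[s][a] : planner's immediate reward
    Ra[s][a] : agent's immediate reward
    Terminating yields (0, 0) to both parties (agent's outside option).
    """
    # V[t][s] = (planner value, agent value) from time t onward at state s.
    V = {horizon: {s: (0.0, 0.0) for s in states}}
    policy = {}
    for t in reversed(range(horizon)):
        V[t] = {}
        for s in states:
            best = (0.0, 0.0)  # default: process stops (agent quits)
            best_a = None
            for a in actions:
                vp = Rp[s][a] + sum(p * V[t + 1][s2][0] for s2, p in P[s][a].items())
                va = Ra[s][a] + sum(p * V[t + 1][s2][1] for s2, p in P[s][a].items())
                # Participation constraint: the agent continues only if its
                # expected continuation utility is nonnegative.
                if va >= 0 and vp > best[0]:
                    best, best_a = (vp, va), a
            V[t][s] = best
            policy[(t, s)] = best_a
    return V[0], policy
```

For example, on a toy two-state instance where action "b" pays the planner well but gives the agent an immediate reward of -1, "b" becomes feasible at time 0 only because the continuation policy compensates the agent later, keeping its total continuation utility nonnegative:

```python
states, actions, horizon = ["s0", "s1"], ["a", "b"], 2
P = {"s0": {"a": {"s1": 1.0}, "b": {"s1": 1.0}},
     "s1": {"a": {"s1": 1.0}, "b": {"s1": 1.0}}}
Rp = {"s0": {"a": 1, "b": 5}, "s1": {"a": 1, "b": 3}}
Ra = {"s0": {"a": 1, "b": -1}, "s1": {"a": 2, "b": -1}}
V0, pol = plan_with_participation(states, actions, P, Rp, Ra, horizon)
# At s0 the planner plays "b" (agent's continuation utility ends up at 1 >= 0).
```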

Published

2022-06-28

How to Cite

Zhang, H., Cheng, Y., & Conitzer, V. (2022). Planning with Participation Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5260-5267. https://doi.org/10.1609/aaai.v36i5.20462

Section

AAAI Technical Track on Game Theory and Economic Paradigms