Policy Learning for Robust Markov Decision Process with a Mismatched Generative Model

Authors

  • Jialian Li, Tsinghua University
  • Tongzheng Ren, UT Austin & Google Brain
  • Dong Yan, Tsinghua University
  • Hang Su, Tsinghua University
  • Jun Zhu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v36i7.20705

Keywords:

Machine Learning (ML)

Abstract

In high-stakes scenarios like medical treatment and autonomous driving, it is risky or even infeasible to collect online experimental data to train the agent. Simulation-based training can alleviate this issue, but may suffer from the inherent mismatch between the simulator and the real environment. It is therefore imperative to utilize the simulator to learn a policy that remains robust under real-world deployment. In this work, we consider policy learning for Robust Markov Decision Processes (RMDPs), where the agent seeks a policy that is robust to unexpected perturbations of the environment. Specifically, we focus on the setting where the training environment can be characterized as a generative model and a constrained perturbation can be added to the model during testing. Our goal is to identify a near-optimal robust policy for the perturbed testing environment, which introduces additional technical difficulties, as we must simultaneously handle the statistical uncertainty in estimating the training environment from samples and find the worst-case perturbation for testing. To address this issue, we propose a generic method that formalizes the perturbation as an opponent, yielding a two-player zero-sum game, and we further show that the Nash equilibrium of this game corresponds to the robust policy. We prove that, with a polynomial number of samples from the generative model, our algorithm finds a near-optimal robust policy with high probability. Thanks to the game-theoretical formulation, our method can handle general perturbations under mild assumptions and extends to more complex problems such as robust partially observable Markov decision processes.
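To make the game-theoretical formulation concrete, the sketch below illustrates the general recipe on a tabular problem: estimate a model from generative-model samples, then run value iteration in which an opponent picks the worst-case transition perturbation inside a constrained set. This is a minimal illustration, not the paper's algorithm: the L1-ball uncertainty set of radius `radius` and the `sampler(s, a)` callback for the generative model are assumptions made here for concreteness, while the paper covers more general perturbations.

```python
import numpy as np

def estimate_model(sampler, n_states, n_actions, n_samples):
    """Build an empirical transition model from a generative model.

    `sampler(s, a)` is a hypothetical callback returning one next state
    drawn from the (unknown) training environment.
    """
    counts = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                counts[s, a, sampler(s, a)] += 1
    return counts / n_samples

def worst_case_value(p_hat, v, radius):
    """Opponent's best response over an L1 ball (assumed small radius):
    move up to radius/2 of probability mass from the highest-value next
    states to the lowest-value one, then evaluate the expectation."""
    p = p_hat.copy()
    budget = radius / 2.0
    for s in np.argsort(v)[::-1]:      # drain the best states first
        take = min(p[s], budget)
        p[s] -= take
        budget -= take
        if budget <= 1e-12:
            break
    p[np.argmin(v)] += radius / 2.0 - budget   # dump moved mass on the worst state
    return p @ v

def robust_value_iteration(p_hat, reward, gamma=0.9, radius=0.1, n_iters=500):
    """Zero-sum game view: the agent maximizes over actions while the
    opponent minimizes over perturbed transitions in the L1 ball."""
    n_states, n_actions, _ = p_hat.shape
    v = np.zeros(n_states)
    for _ in range(n_iters):
        q = np.array([[reward[s, a] + gamma * worst_case_value(p_hat[s, a], v, radius)
                       for a in range(n_actions)]
                      for s in range(n_states)])
        v = q.max(axis=1)
    return q.argmax(axis=1), v
```

For an L1 ball intersected with the probability simplex, the greedy mass-shifting step above is the opponent's exact best response, which is what keeps the inner minimization cheap; richer perturbation classes would replace `worst_case_value` with a different best-response computation.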

Published

2022-06-28

How to Cite

Li, J., Ren, T., Yan, D., Su, H., & Zhu, J. (2022). Policy Learning for Robust Markov Decision Process with a Mismatched Generative Model. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7417-7425. https://doi.org/10.1609/aaai.v36i7.20705

Issue

Vol. 36 No. 7 (2022)

Section

AAAI Technical Track on Machine Learning II