Distillation of RL Policies with Formal Guarantees via Variational Abstraction of Markov Decision Processes

Authors

  • Florent Delgrange, Vrije Universiteit Brussel
  • Ann Nowé, Vrije Universiteit Brussel
  • Guillermo A. Pérez, University of Antwerp

DOI:

https://doi.org/10.1609/aaai.v36i6.20602

Keywords:

Machine Learning (ML), Knowledge Representation And Reasoning (KRR), Reasoning Under Uncertainty (RU)

Abstract

We consider the challenge of policy simplification and verification in the context of policies learned through reinforcement learning (RL) in continuous environments. In well-behaved settings, RL algorithms have convergence guarantees in the limit. While these guarantees are valuable, they are insufficient for safety-critical applications. Furthermore, they are lost when applying advanced techniques such as deep-RL. To recover guarantees when applying advanced RL algorithms to more complex environments with (i) reachability, (ii) safety-constrained reachability, or (iii) discounted-reward objectives, we build upon the DeepMDP framework to derive new bisimulation bounds between the unknown environment and a learned discrete latent model of it. Our bisimulation bounds enable the application of formal methods for Markov decision processes. Finally, we show how one can use a policy obtained via state-of-the-art RL to efficiently train a variational autoencoder that yields a discrete latent model with provably approximately correct bisimulation guarantees. Additionally, we obtain a distilled version of the policy for the latent model.
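To make the pipeline concrete, below is a minimal, hypothetical Python sketch of the distillation step described in the abstract: transitions are collected under an already-trained policy, mapped through a discrete encoder into latent states, and turned into an empirical latent MDP together with a distilled latent policy (most frequent action per latent state). For illustration only, the learned variational encoder of the paper is replaced by a fixed grid discretisation; the toy environment and all names (grid_encoder, collect_transitions, build_latent_model) are invented for this example and are not the authors' implementation.

```python
# Hypothetical sketch of distilling a continuous-state RL policy into a
# discrete latent model.  The paper's learned variational encoder is replaced
# by a fixed grid discretisation purely for illustration.
import random
from collections import Counter, defaultdict

def grid_encoder(state, cells=10):
    """Stand-in for a discrete encoder: maps a state in [0, 1)^2 to one of
    cells*cells latent states."""
    x, y = state
    return int(x * cells) * cells + int(y * cells)

def collect_transitions(policy, env_step, env_reset, episodes=100, horizon=50):
    """Roll out the (already trained) policy and record (s, a, s') transitions."""
    data = []
    for _ in range(episodes):
        s = env_reset()
        for _ in range(horizon):
            a = policy(s)
            s_next = env_step(s, a)
            data.append((s, a, s_next))
            s = s_next
    return data

def build_latent_model(data, encoder):
    """Estimate empirical latent transition probabilities and distil the policy
    by taking the most frequent action per latent state."""
    transitions = defaultdict(Counter)   # (z, a) -> Counter over z'
    action_votes = defaultdict(Counter)  # z -> Counter over actions
    for s, a, s_next in data:
        z, z_next = encoder(s), encoder(s_next)
        transitions[(z, a)][z_next] += 1
        action_votes[z][a] += 1
    latent_policy = {z: votes.most_common(1)[0][0] for z, votes in action_votes.items()}
    latent_mdp = {
        key: {z2: n / sum(counts.values()) for z2, n in counts.items()}
        for key, counts in transitions.items()
    }
    return latent_mdp, latent_policy

if __name__ == "__main__":
    # Toy continuous environment on the unit square with two actions.
    def env_reset():
        return (random.random(), random.random())

    def env_step(state, action):
        dx = 0.05 if action == 1 else -0.05
        x, y = state
        return (min(max(x + dx, 0.0), 0.999),
                min(max(y + random.uniform(-0.02, 0.02), 0.0), 0.999))

    policy = lambda s: 1 if s[0] < 0.5 else 0  # pretend this came from deep RL
    data = collect_transitions(policy, env_step, env_reset)
    latent_mdp, latent_policy = build_latent_model(data, grid_encoder)
    print(len(latent_policy), "latent states with a distilled action")
```

The resulting finite latent MDP and latent policy are the kind of objects on which off-the-shelf probabilistic model checkers can verify reachability, safety-constrained reachability, or discounted-reward objectives; in the paper, the encoder and latent dynamics are learned variationally and come with bisimulation bounds rather than being fixed by hand as in this sketch.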

Published

2022-06-28

How to Cite

Delgrange, F., Nowé, A., & Pérez, G. A. (2022). Distillation of RL Policies with Formal Guarantees via Variational Abstraction of Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6497-6505. https://doi.org/10.1609/aaai.v36i6.20602

Section

AAAI Technical Track on Machine Learning I