Verifiable and Compositional Reinforcement Learning Systems

Authors

  • Cyrus Neary Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, USA
  • Christos Verginis Division of Signals and Systems, Department of Electrical Engineering, Uppsala University, Sweden
  • Murat Cubuktepe Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, USA
  • Ufuk Topcu Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, USA; Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, USA

Keywords:

Planning For Automation Of Reinforcement Learning, Reinforcement Learning, Compositional Systems, Formal Verification, Parametric Markov Decision Process, Task Decomposition

Abstract

We propose a framework for verifiable and compositional reinforcement learning (RL) in which a collection of RL subsystems, each of which learns to accomplish a separate subtask, is composed to achieve an overall task. The framework consists of a high-level model, represented as a parametric Markov decision process (pMDP), which is used to plan and to analyze compositions of subsystems, and of the collection of low-level subsystems themselves. By defining interfaces between the subsystems, the framework enables automatic decomposition of task specifications, e.g., reach a target set of states with a probability of at least 0.95, into individual subtask specifications, i.e., achieve the subsystem's exit conditions with at least some minimum probability, given that its entry conditions are met. This in turn allows the subsystems to be trained and tested independently; if each learns a policy satisfying the appropriate subtask specification, then their composition is guaranteed to satisfy the overall task specification. Conversely, if the subtask specifications cannot all be satisfied by the learned policies, we present a method, formulated as the problem of finding an optimal set of parameters in the pMDP, to automatically update the subtask specifications to account for the observed shortcomings. The result is an iterative procedure for defining subtask specifications and for training the subsystems to meet them. As an additional benefit, this procedure automatically identifies particularly challenging or important components of the overall task, which can then be focused on during training. Experimental results demonstrate the framework's novel capabilities in both discrete and continuous RL settings. A collection of RL subsystems is trained, using proximal policy optimization, to navigate different portions of a labyrinth environment. A cross-labyrinth task specification is then decomposed into subtask specifications.
Challenging portions of the labyrinth are automatically avoided if their corresponding subsystems cannot learn satisfactory policies within allowed training budgets. Unnecessary subsystems are not trained at all. The result is a compositional RL system that efficiently learns to satisfy task specifications.
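The core idea of the decomposition can be illustrated with a small sketch. Assuming a purely sequential (chain) composition of subsystems, the probability of satisfying the overall reachability specification is at least the product of the per-subsystem success probabilities, so per-subtask thresholds can be derived from the overall target and tightened when a trained subsystem falls short. All function names here are illustrative; the paper itself solves a parameter-synthesis problem over the full pMDP rather than this uniform split.

```python
import math

def decompose_spec(overall_target, num_subsystems):
    """Hypothetical uniform decomposition: split an overall reachability
    target into equal per-subtask probability thresholds whose product
    meets the target (the paper instead optimizes pMDP parameters)."""
    return [overall_target ** (1.0 / num_subsystems)] * num_subsystems

def composition_satisfies(subtask_probs, overall_target, tol=1e-9):
    """For a sequential composition, the overall success probability is
    the product of per-subsystem success probabilities."""
    return math.prod(subtask_probs) >= overall_target - tol

# Decompose "reach the target set with probability >= 0.95" over 3 subsystems.
thresholds = decompose_spec(0.95, 3)  # each threshold ~ 0.95^(1/3)
assert composition_satisfies(thresholds, 0.95)

# If subsystem 0 only achieves 0.97 of its ~0.983 threshold after training,
# redistribute the remaining probability mass over the other two subtasks.
achieved = 0.97
remaining = 0.95 / achieved            # mass the rest of the chain must cover
updated = [achieved, remaining ** 0.5, remaining ** 0.5]
assert composition_satisfies(updated, 0.95)
```

When the remaining mass exceeds 1, no feasible thresholds exist for the other subsystems, which mirrors the paper's behavior of avoiding labyrinth portions whose subsystems cannot learn satisfactory policies within the training budget.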

Published

2022-06-13

How to Cite

Neary, C., Verginis, C., Cubuktepe, M., & Topcu, U. (2022). Verifiable and Compositional Reinforcement Learning Systems. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 615-623. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/19849