Verifiable and Interpretable Reinforcement Learning through Program Synthesis


Abhinav Verma, Rice University



We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in high-level programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks, and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically using domain knowledge to guide the policy search. The interpretability and verifiability of these policies provide the opportunity to deploy RL-based solutions in safety-critical environments. This thesis draws on, and extends, work from both the machine learning and formal methods communities.
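To make the contrast concrete, here is a hypothetical illustration (not taken from the thesis) of what a programmatic policy looks like: an ordinary, human-readable program mapping observations to actions, in contrast to an opaque neural network. The observation layout and the constants are assumptions chosen for a toy cart-pole-style task.

```python
def programmatic_policy(obs):
    """Toy interpretable policy: push the cart toward the falling pole.

    obs = (cart_position, cart_velocity, pole_angle, pole_velocity).
    Returns 0 (push left) or 1 (push right).
    """
    _, _, pole_angle, pole_velocity = obs
    # Interpretable rule: act on a weighted sum of the pole's angle and
    # angular velocity (the weight 0.5 is an illustrative assumption).
    return 1 if pole_angle + 0.5 * pole_velocity > 0 else 0
```

Because the policy is a short program over named quantities, a person can read off exactly why an action was chosen, and symbolic tools can reason about its behavior over all inputs, which is much harder to do for a neural network.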




How to Cite

Verma, A. (2019). Verifiable and Interpretable Reinforcement Learning through Program Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9902-9903.



Doctoral Consortium Track