Learning Interpretable Models Expressed in Linear Temporal Logic

Authors

  • Alberto Camacho, University of Toronto
  • Sheila A. McIlraith, University of Toronto

DOI

https://doi.org/10.1609/icaps.v29i1.3529

Abstract

We examine the problem of learning models that characterize the high-level behavior of a system based on observation traces. Our aim is to develop models that are human interpretable. To this end, we introduce the problem of learning a Linear Temporal Logic (LTL) formula that parsimoniously captures a given set of positive and negative example traces. Our approach to learning LTL exploits a symbolic state representation, searching through a space of labeled skeleton formulae to construct an alternating automaton that models observed behavior, from which the LTL formula can be read off. Construction of interpretable behavior models is central to a diversity of applications related to planning and plan recognition. We showcase the relevance and significance of our work in the context of behavior description and discrimination: i) active learning of a human-interpretable behavior model that describes observed examples obtained by interaction with an oracle; ii) passive learning of a classifier that discriminates individual agents, based on the human-interpretable signature way in which they perform particular tasks. Experiments demonstrate the effectiveness of our symbolic model learning approach in providing human-interpretable models and classifiers from reduced example sets.
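
To make the problem setup concrete, the following Python sketch learns an LTL formula consistent with labeled finite traces by brute-force enumeration of small candidate formulae. This is an illustration of the learning problem only, not the paper's method: the paper searches a space of labeled skeleton formulae and builds an alternating automaton, whereas this sketch simply enumerates formulae by syntactic depth. All names (holds, learn_ltl, formulae) and the trace encoding (a list of sets of atomic propositions, one per time step) are hypothetical choices made for this example.

    from itertools import product

    def holds(f, trace, i=0):
        # Evaluate formula f on a finite trace at position i.
        # A trace is a list of sets of atomic propositions true at each step.
        op = f[0]
        if op == "ap":                    # atomic proposition
            return i < len(trace) and f[1] in trace[i]
        if op == "not":
            return not holds(f[1], trace, i)
        if op == "and":
            return holds(f[1], trace, i) and holds(f[2], trace, i)
        if op == "X":                     # next (strong: a next step must exist)
            return i + 1 < len(trace) and holds(f[1], trace, i + 1)
        if op == "F":                     # eventually
            return any(holds(f[1], trace, j) for j in range(i, len(trace)))
        if op == "G":                     # always
            return all(holds(f[1], trace, j) for j in range(i, len(trace)))
        raise ValueError(f"unknown operator: {op}")

    def formulae(props, depth):
        # All formulae up to the given syntactic depth.
        if depth == 0:
            return [("ap", p) for p in props]
        smaller = formulae(props, depth - 1)
        out = list(smaller)
        for g in smaller:
            out += [("not", g), ("X", g), ("F", g), ("G", g)]
        out += [("and", g, h) for g, h in product(smaller, smaller)]
        return out

    def learn_ltl(positive, negative, props, max_depth=2):
        # Return a lowest-depth formula that accepts every positive trace
        # and rejects every negative trace (parsimony via depth ordering).
        for depth in range(max_depth + 1):
            for f in formulae(props, depth):
                if all(holds(f, t) for t in positive) and \
                   not any(holds(f, t) for t in negative):
                    return f
        return None

    pos = [[{"a"}, {"a"}, {"a", "b"}], [{"a"}, {"a", "b"}]]
    neg = [[{"a"}, set(), {"b"}]]
    print(learn_ltl(pos, neg, ["a", "b"]))  # -> ('X', ('ap', 'a'))

Enumeration of this kind scales exponentially in formula depth; the symbolic search over labeled skeleton formulae described in the abstract is precisely what lets the paper's approach avoid such naive enumeration while preserving the parsimony of the learned formula.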

Published

2021-05-25

How to Cite

Camacho, A., & McIlraith, S. A. (2021). Learning Interpretable Models Expressed in Linear Temporal Logic. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1), 621-630. https://doi.org/10.1609/icaps.v29i1.3529