Learning Interpretable Temporal Properties from Positive Examples Only

Authors

  • Rajarshi Roy, Max Planck Institute for Software Systems, Kaiserslautern, Germany
  • Jean-Raphaël Gaglione, University of Texas at Austin, Texas, USA
  • Nasim Baharisangari, Arizona State University, Arizona, USA
  • Daniel Neider, TU Dortmund University, Dortmund, Germany; Center for Trustworthy Data Science and Security, University Alliance Ruhr, Germany
  • Zhe Xu, Arizona State University, Arizona, USA
  • Ufuk Topcu, University of Texas at Austin, Texas, USA

DOI:

https://doi.org/10.1609/aaai.v37i5.25800

Keywords:

KRR: Knowledge Representation Languages, ML: Transparent, Interpretable, Explainable ML, CSO: Constraint Satisfaction

Abstract

We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. Following recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and formulas of linear temporal logic over finite traces (LTL_f). In contrast to most existing works for learning DFAs and LTL_f formulas, we consider learning from only positive examples. Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems. To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers. Our learning algorithms are based on two approaches: a symbolic and a counterexample-guided one. The symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, whereas the counterexample-guided one relies on generating suitable negative examples to guide the learning. Both approaches provide us with effective algorithms with minimality guarantees on the learned models. To assess the effectiveness of our algorithms, we evaluate them on a few practical case studies.
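To illustrate the counterexample-guided approach described in the abstract, the sketch below shows one way such a learn-and-refine loop could be structured. It is an illustrative outline only, not the authors' implementation: the helper callables `learn_consistent` (e.g., a SAT/CSP-based passive learner) and `find_smaller_language` (a language-minimality check that returns a witness word) are hypothetical and must be supplied by the caller.

```python
# Hedged sketch of a counterexample-guided loop for learning a concise,
# language-minimal model (e.g., a DFA or an LTL_f formula) from positive
# examples only. All helpers are assumptions passed in by the caller.

from typing import Callable, Optional, Set, Tuple

Model = object              # treated opaquely here (DFA, LTL_f formula, ...)
Word = Tuple[str, ...]      # a finite word over the alphabet


def learn_from_positives(
    positives: Set[Word],
    learn_consistent: Callable[[Set[Word], Set[Word]], Optional[Model]],
    find_smaller_language: Callable[[Model, Set[Word]], Optional[Word]],
    max_rounds: int = 100,
) -> Optional[Model]:
    """Illustrative counterexample-guided loop (not the authors' code).

    learn_consistent(P, N): returns a concise model accepting all of P and
        rejecting all of N, or None if no such model exists.
    find_smaller_language(M, P): returns a word accepted by M that an equally
        concise model accepting P would reject (evidence that M is not
        language-minimal), or None if M is already minimal.
    """
    negatives: Set[Word] = set()
    candidate: Optional[Model] = None
    for _ in range(max_rounds):
        candidate = learn_consistent(positives, negatives)
        if candidate is None:
            # Over-constrained: a generated negative ruled out every model.
            return None
        witness = find_smaller_language(candidate, positives)
        if witness is None:
            # No equally concise model with a strictly smaller language:
            # the candidate is language-minimal, so return it.
            return candidate
        # Treat the witness as a synthetic negative example and refine.
        negatives.add(witness)
    return candidate  # best effort once the round budget is exhausted
```

The loop never needs user-provided negative examples: negatives are generated on the fly whenever the current candidate's language can still be shrunk, which mirrors the role of language minimality as a regularizer in the paper.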

Published

2023-06-26

How to Cite

Roy, R., Gaglione, J.-R., Baharisangari, N., Neider, D., Xu, Z., & Topcu, U. (2023). Learning Interpretable Temporal Properties from Positive Examples Only. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 6507-6515. https://doi.org/10.1609/aaai.v37i5.25800

Section

AAAI Technical Track on Knowledge Representation and Reasoning