Learning Interpretable Temporal Properties from Positive Examples Only
Keywords: KRR: Knowledge Representation Languages, ML: Transparent, Interpretable, Explainable ML, CSO: Constraint Satisfaction
Abstract
We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. Following recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and formulas of linear temporal logic over finite traces (LTL_f). In contrast to most existing works on learning DFAs and LTL_f formulas, we learn from positive examples only. Our motivation is that negative examples are generally difficult to observe, in particular from black-box systems. To learn meaningful models from positive examples alone, we design algorithms that rely on conciseness and language minimality of models as regularizers. Our learning algorithms follow two approaches: a symbolic one and a counterexample-guided one. The symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, whereas the counterexample-guided one relies on generating suitable negative examples to guide the learning. Both approaches yield effective algorithms with minimality guarantees on the learned models. To assess their effectiveness, we evaluate the algorithms on practical case studies.
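To give a concrete sense of language minimality as a regularizer, the toy sketch below brute-forces all DFAs of a fixed size that accept every positive example and keeps one whose language, restricted to short words, is smallest. This is purely illustrative and not the paper's method: the actual algorithms use constraint-satisfaction encodings and counterexample generation, and all function names here are our own.

```python
from itertools import product

def accepts(delta, accepting, word):
    """Run the DFA (transition map delta, accepting-state set) on a word."""
    state = 0  # state 0 is the initial state
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

def language_size(delta, accepting, alphabet, max_len):
    """Count the words of length <= max_len that the DFA accepts."""
    return sum(
        accepts(delta, accepting, w)
        for n in range(max_len + 1)
        for w in product(alphabet, repeat=n)
    )

def learn_minimal_dfa(positives, alphabet, n_states=2, max_len=3):
    """Among all DFAs with n_states states that accept every positive
    example, return one with the fewest accepted words up to max_len."""
    states = range(n_states)
    keys = [(q, a) for q in states for a in alphabet]
    best = None
    for trans in product(states, repeat=len(keys)):
        delta = dict(zip(keys, trans))
        for bits in product([False, True], repeat=n_states):
            accepting = {q for q in states if bits[q]}
            if all(accepts(delta, accepting, w) for w in positives):
                size = language_size(delta, accepting, alphabet, max_len)
                if best is None or size < best[0]:
                    best = (size, delta, accepting)
    return best

# Example: from positives a, aa, aaa, the language-minimal 2-state DFA
# accepts exactly the words of a* (up to length 3) and rejects any word
# containing b.
size, delta, accepting = learn_minimal_dfa(["a", "aa", "aaa"], "ab")
```

Note how, without the language-size objective, the degenerate DFA accepting every word would be a valid answer; penalizing language size is what makes learning from positive examples alone meaningful, which the paper's symbolic and counterexample-guided algorithms achieve at scale.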
How to Cite
Roy, R., Gaglione, J.-R., Baharisangari, N., Neider, D., Xu, Z., & Topcu, U. (2023). Learning Interpretable Temporal Properties from Positive Examples Only. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 6507-6515. https://doi.org/10.1609/aaai.v37i5.25800
AAAI Technical Track on Knowledge Representation and Reasoning