Interpretable Sequence Classification via Discrete Optimization

Authors

  • Maayan Shvo — University of Toronto; Vector Institute; Schwartz Reisman Institute for Technology and Society
  • Andrew C. Li — University of Toronto; Vector Institute
  • Rodrigo Toro Icarte — University of Toronto; Vector Institute
  • Sheila A. McIlraith — University of Toronto; Vector Institute; Schwartz Reisman Institute for Technology and Society

DOI:

https://doi.org/10.1609/aaai.v35i11.17161

Keywords:

Classification and Regression, Activity and Plan Recognition, Constraint Optimization

Abstract

Sequence classification is the task of predicting a class label given a sequence of observations. In many applications such as healthcare monitoring or intrusion detection, early classification is crucial to prompt intervention. In this work, we learn sequence classifiers that favour early classification from an evolving observation trace. While many state-of-the-art sequence classifiers are neural networks, and in particular LSTMs, our classifiers take the form of finite state automata and are learned via discrete optimization. Our automata-based classifiers are interpretable---supporting explanation, counterfactual reasoning, and human-in-the-loop modification---and have strong empirical performance. Experiments over a suite of goal recognition and behaviour classification datasets show our learned automata-based classifiers to have comparable test performance to LSTM-based classifiers, with the added advantage of being interpretable.
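To make the idea concrete, here is a minimal sketch of an automaton-based sequence classifier. This is a hand-built toy automaton for illustration only; the paper's contribution is *learning* such automata via discrete optimization, which is not shown here. The state names, class labels, and alphabet below are all hypothetical. The key property illustrated is early classification: the classifier emits a label as soon as the evolving trace drives the automaton into an absorbing, labelled state, rather than waiting for the full sequence.

```python
# Illustrative sketch only: a hand-built finite state automaton classifier.
# The paper learns such automata from data via discrete optimization; here the
# transitions and labels are fixed by hand to show early classification.

class DFAClassifier:
    def __init__(self, transitions, start, labels):
        # transitions: {(state, symbol): next_state}
        # labels: {absorbing_state: class_label}
        self.transitions = transitions
        self.start = start
        self.labels = labels

    def classify(self, trace):
        """Return (label, step) at the earliest absorbing state,
        or (None, len(trace)) if no label is reached."""
        state = self.start
        for t, symbol in enumerate(trace):
            # Self-loop on symbols with no explicit transition.
            state = self.transitions.get((state, symbol), state)
            if state in self.labels:
                # Early classification: label before the trace ends.
                return self.labels[state], t + 1
        return None, len(trace)

# Toy automaton: a trace containing "a" then "b" is class "A";
# "b" then "a" is class "B".
dfa = DFAClassifier(
    transitions={
        ("q0", "a"): "qa", ("q0", "b"): "qb",
        ("qa", "a"): "qa", ("qa", "b"): "accA",
        ("qb", "b"): "qb", ("qb", "a"): "accB",
    },
    start="q0",
    labels={"accA": "A", "accB": "B"},
)

print(dfa.classify(["a", "a", "b", "a"]))  # ('A', 3): classified at step 3 of 4
print(dfa.classify(["b", "a"]))            # ('B', 2)
```

Because the classifier is a small explicit state machine, its decisions can be traced state-by-state, which is what makes this family of models interpretable and amenable to counterfactual reasoning and human modification.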

Published

2021-05-18

How to Cite

Shvo, M., Li, A. C., Toro Icarte, R., & McIlraith, S. A. (2021). Interpretable Sequence Classification via Discrete Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9647-9656. https://doi.org/10.1609/aaai.v35i11.17161

Section

AAAI Technical Track on Machine Learning IV