Feature Importance Explanations for Temporal Black-Box Models

Authors

  • Akshay Sood, University of Wisconsin-Madison
  • Mark Craven, University of Wisconsin-Madison

DOI

https://doi.org/10.1609/aaai.v36i8.20810

Keywords

Machine Learning (ML)

Abstract

Models in the supervised learning framework may capture rich and complex representations over the features that are hard for humans to interpret. Existing methods to explain such models are often specific to architectures and data where the features do not have a time-varying component. In this work, we propose TIME, a method to explain models that are inherently temporal in nature. Our approach (i) uses a model-agnostic permutation-based approach to analyze global feature importance, (ii) identifies the importance of salient features with respect to their temporal ordering as well as localized windows of influence, and (iii) uses hypothesis testing to provide statistical rigor.
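
The abstract describes the approach only at a high level, and the paper's actual TIME implementation is not reproduced here. As a hypothetical illustration of the ingredients it names, the sketch below combines model-agnostic permutation importance on sequence data with an optional time window (to localize a feature's influence) and a one-sided significance test per feature. The `model`, `loss_fn`, array shapes, and the choice of a one-sample t-test are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy import stats

def permutation_importance_temporal(model, X, y, loss_fn,
                                    window=None, n_repeats=30, seed=0):
    """Permutation importance for a black-box model over sequence data.

    X: array of shape (n_samples, n_timesteps, n_features).
    window: optional (start, stop) pair of timesteps; if given, only that
        slice of each feature is permuted, localizing the test in time.
    Returns, per feature, the mean increase in loss after permutation and
    a one-sided p-value for the null "permuting does not increase loss".
    """
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, model.predict(X))
    t0, t1 = window if window is not None else (0, X.shape[1])
    results = []
    for j in range(X.shape[2]):
        deltas = np.empty(n_repeats)
        for r in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            # Break the feature-label association by shuffling feature j
            # across samples, restricted to the chosen time window; the
            # within-series temporal structure of feature j is preserved.
            Xp[:, t0:t1, j] = X[perm, t0:t1, j]
            deltas[r] = loss_fn(y, model.predict(Xp)) - base_loss
        # One-sided t-test: does permuting feature j significantly
        # degrade the model's predictions?
        t_stat, p_two = stats.ttest_1samp(deltas, 0.0)
        p_one = p_two / 2 if t_stat > 0 else 1.0 - p_two / 2
        results.append({"feature": j,
                        "mean_loss_increase": float(deltas.mean()),
                        "p_value": float(p_one)})
    return results
```

Comparing p-values across sliding windows gives a rough picture of when a feature matters; if many windows are tested, a multiple-comparison correction (e.g., Bonferroni) is needed to keep the overall error rate controlled.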

Published

2022-06-28

How to Cite

Sood, A., & Craven, M. (2022). Feature Importance Explanations for Temporal Black-Box Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8351-8360. https://doi.org/10.1609/aaai.v36i8.20810

Section

AAAI Technical Track on Machine Learning III