Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models

Authors

  • Zhengguang Wang, University of Virginia, Charlottesville, Virginia

DOI:

https://doi.org/10.1609/aaai.v38i21.30559

Keywords:

Interpretability, Sensitivity Analysis, Time-Series Deep Learning

Abstract

This work evaluates interpretability methods for time-series deep learning models. Sensitivity analysis assesses how changes to the input affect the output, making it a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work investigates perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work intends to answer three research questions: 1) Do different sensitivity analysis methods yield comparable outputs and attribute importance rankings? 2) For the same sensitivity analysis method, do different deep learning models affect the output of the sensitivity analysis? 3) How well do the results from sensitivity analysis methods align with the ground truth?
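The core idea behind perturbation-based sensitivity analysis can be illustrated with a minimal occlusion-style sketch: perturb one element of the input series at a time and record how much the model output changes. This is a generic illustration, not the paper's benchmark protocol; the `model` callable, the zero baseline, and the toy shapes are all assumptions for demonstration.

```python
import numpy as np

def perturbation_sensitivity(model, x, baseline=0.0):
    """Score each (time step, feature) of a series x by occluding it.

    model: callable mapping an array of shape (T, F) to a scalar output
           (a stand-in for any time-series deep learning model).
    x: input series of shape (T, F).
    Returns an array of shape (T, F) with |f(x) - f(x_perturbed)| scores.
    """
    base_out = model(x)
    scores = np.zeros_like(x, dtype=float)
    for t in range(x.shape[0]):
        for f in range(x.shape[1]):
            x_pert = x.copy()
            x_pert[t, f] = baseline                # occlude one element
            scores[t, f] = abs(model(x_pert) - base_out)
    return scores

# Toy example: the "model" just sums the last time step, so only
# elements at t = T-1 should receive nonzero sensitivity scores.
toy_model = lambda s: float(s[-1].sum())
x = np.arange(6, dtype=float).reshape(3, 2)        # T=3 time steps, F=2 features
scores = perturbation_sensitivity(toy_model, x)    # nonzero only in last row
```

Ranking the resulting scores yields the attribute importance rankings that the first research question compares across methods.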

Published

2024-03-24

How to Cite

Wang, Z. (2024). Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23768-23770. https://doi.org/10.1609/aaai.v38i21.30559