Counterfactual eXplainable AI (XAI) Method for Deep Learning-Based Multivariate Time Series Classification

Authors

  • Alan Gabriel Paredes Cetina, SnT, University of Luxembourg
  • Kaouther Benguessoum, SnT, University of Luxembourg
  • Raoni Lourenco, SnT, University of Luxembourg
  • Sylvain Kubler, SnT, University of Luxembourg

DOI:

https://doi.org/10.1609/aaai.v40i21.38792

Abstract

Recent advances in deep learning have improved multivariate time series (MTS) classification and regression by capturing complex patterns, but their lack of transparency hinders decision-making. Explainable AI (XAI) methods offer partial insights, yet often fall short of conveying the full decision space. Counterfactual Explanations (CE) provide a promising alternative, but current approaches typically prioritize accuracy, proximity, or sparsity -- rarely all three -- limiting their practical value. To address this, we propose CONFETTI, a novel multi-objective CE method for MTS. CONFETTI identifies key MTS subsequences, locates a counterfactual target, and optimally modifies the time series to balance prediction confidence, proximity, and sparsity. This method provides actionable insights with minimal changes, improving interpretability and decision support. CONFETTI is evaluated on seven MTS datasets from the UEA archive, demonstrating its effectiveness across various domains. CONFETTI consistently outperforms state-of-the-art CE methods on its optimization objectives and on six other metrics from the literature, achieving ≥ 10% higher confidence while improving sparsity by ≥ 40%.
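The counterfactual workflow the abstract outlines (find a target instance of another class, then change as few subsequences of the query as possible until the classifier's prediction flips) can be illustrated with a minimal sketch. This is not the authors' CONFETTI implementation: the toy linear "classifier", the synthetic nearest-unlike-neighbor series, and the greedy window-substitution heuristic (a crude stand-in for the paper's multi-objective optimization) are all assumptions made for illustration only.

```python
import numpy as np

# Toy stand-in for a trained classifier: a logistic scorer over an MTS.
# (CONFETTI targets deep models; this linear model is only for illustration.)
rng = np.random.default_rng(0)
T, D = 20, 3                          # time steps, channels
w = rng.normal(size=(T, D))           # fixed "learned" weights

def predict_proba(x):
    """Return [P(class 0), P(class 1)] for a (T, D) series."""
    z = float((w * x).sum())
    p1 = 1.0 / (1.0 + np.exp(-z))
    return np.array([1.0 - p1, p1])

def counterfactual_sketch(x, target_sample, target_class, win=5):
    """Greedily copy fixed-length windows from a target-class series into x,
    most helpful window first, stopping as soon as the prediction flips
    (few copied windows = sparser, closer counterfactual)."""
    cf = x.copy()
    gains = []
    for s in range(0, T - win + 1, win):
        trial = cf.copy()
        trial[s:s + win] = target_sample[s:s + win]
        gains.append((predict_proba(trial)[target_class], s))
    for _, s in sorted(gains, reverse=True):
        cf[s:s + win] = target_sample[s:s + win]
        if predict_proba(cf).argmax() == target_class:
            break
    return cf

x = rng.normal(size=(T, D))           # query series to explain
nun = x + 2.0 * w                     # synthetic "nearest unlike neighbor"
cf = counterfactual_sketch(x, nun, target_class=1)
changed = int((cf != x).any(axis=1).sum())
print("counterfactual class:", predict_proba(cf).argmax())
print(changed, "of", T, "time steps changed")
```

The real method additionally weighs confidence, proximity, and sparsity as explicit objectives rather than using a greedy stop-at-first-flip rule.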

Published

2026-03-14

How to Cite

Cetina, A. G. P., Benguessoum, K., Lourenco, R., & Kubler, S. (2026). Counterfactual eXplainable AI (XAI) Method for Deep Learning-Based Multivariate Time Series Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 40(21), 17393-17400. https://doi.org/10.1609/aaai.v40i21.38792

Section

AAAI Technical Track on Humans and AI