Black-Box Adversarial Attack on Time Series Classification

Authors

  • Daizong Ding, Fudan University
  • Mi Zhang, Fudan University
  • Fuli Feng, University of Science and Technology of China
  • Yuanmin Huang, Fudan University
  • Erling Jiang, Fudan University
  • Min Yang, Fudan University

DOI:

https://doi.org/10.1609/aaai.v37i6.25896

Keywords:

ML: Time-Series/Data Streams, ML: Adversarial Learning & Robustness

Abstract

With the increasing use of deep neural networks (DNNs) in time series classification (TSC), recent work has revealed the threat of adversarial attacks, where an adversary constructs adversarial examples to cause model mistakes. However, existing research on adversarial attacks against TSC typically adopts an unrealistic white-box setting in which model details are transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and further requires the adversarial example to be stealthy. Theoretical analyses reveal that the key lies in estimating the black-box gradient while resolving the diversity and non-convexity of TSC models, and in restricting the l0 norm of the perturbation when constructing adversarial samples. Towards this end, we propose a new framework named BlackTreeS, which solves the hard optimization problem of adversarial example construction with two simple yet effective modules. In particular, we propose a tree search strategy to find influential positions in a sequence, and independently estimate the black-box gradients for these positions. Extensive experiments on three real-world TSC datasets and five DNN-based models validate the effectiveness of BlackTreeS, e.g., it improves the attack success rate from 19.3% to 27.3% and decreases the detection success rate from 90.9% to 6.8% for LSTM on the UWave dataset.
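For readers unfamiliar with query-only attacks, the sketch below illustrates the general idea described in the abstract: estimating gradients at a few selected time steps via finite differences of black-box loss queries, while keeping the l0 norm of the perturbation small. This is a hypothetical Python illustration, not the authors' BlackTreeS implementation; in the paper the influential positions come from the proposed tree search, whereas here they are simply supplied by the caller, and names such as model_query are assumptions for illustration only.

import numpy as np

# Hypothetical sketch of per-position black-box gradient estimation with an
# l0-bounded perturbation. `model_query(x, y)` is assumed to return the
# classification loss of the target model for sequence x and label y,
# obtained through queries only (no gradient access).

def estimate_gradient(model_query, x, y, positions, sigma=1e-2, n_samples=20):
    # Estimate the loss gradient only at the selected time steps.
    grad = np.zeros_like(x)
    for t in positions:
        g = 0.0
        for _ in range(n_samples):
            u = np.random.randn()                    # random scalar direction
            x_plus, x_minus = x.copy(), x.copy()
            x_plus[t] += sigma * u
            x_minus[t] -= sigma * u
            # central finite difference of the queried loss along u
            g += (model_query(x_plus, y) - model_query(x_minus, y)) * u / (2 * sigma)
        grad[t] = g / n_samples
    return grad

def attack_step(model_query, x, y, positions, step=0.05, eps=0.5):
    # One ascent step on the estimated gradient; only `positions` change,
    # so the l0 norm of the perturbation is at most len(positions).
    grad = estimate_gradient(model_query, x, y, positions)
    delta = np.clip(step * np.sign(grad), -eps, eps)
    return x + delta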

Published

2023-06-26

How to Cite

Ding, D., Zhang, M., Feng, F., Huang, Y., Jiang, E., & Yang, M. (2023). Black-Box Adversarial Attack on Time Series Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7358-7368. https://doi.org/10.1609/aaai.v37i6.25896

Section

AAAI Technical Track on Machine Learning I