Sparse Adversarial Perturbations for Videos


  • Xingxing Wei Tsinghua University
  • Jun Zhu Tsinghua University
  • Sha Yuan Tsinghua University
  • Hang Su Tsinghua University



Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions to videos have rarely been explored. Compared with images, attacking a video requires considering not only spatial cues but also temporal cues. Moreover, to improve imperceptibility and reduce computational cost, perturbations should be added to as few frames as possible, i.e., the adversarial perturbations should be temporally sparse. This further motivates the propagation of perturbations, meaning that a perturbation added to the current frame can transfer to subsequent frames via their temporal interactions, so no (or few) extra perturbations are needed to misclassify those frames. To this end, we propose the first white-box video attack method, which uses an l2,1-norm based optimization algorithm to compute sparse adversarial perturbations for videos. We choose action recognition as the target task, and networks with a CNN+RNN architecture as threat models to verify our method. Thanks to the propagation, we can compute perturbations on a shortened version of a video and then adapt them to the full-length video to fool DNNs. Experimental results on the UCF101 dataset demonstrate that even when only one frame of a video is perturbed, the fooling rate can still reach 59.7%.
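The key mechanism behind temporal sparsity is the l2,1 norm: summing the l2 norms of the per-frame perturbations penalizes each frame as a group, driving entire frames to exactly zero perturbation. The paper's full optimization is not reproduced here; the sketch below only illustrates the group soft-thresholding (proximal) step that an l2,1 penalty induces when frames are the groups. The function name and toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def prox_l21(P, lam):
    """Proximal operator of lam * sum_t ||P[t]||_2 (l2,1 norm over frames).

    P has shape (num_frames, frame_dim); each row is the perturbation of
    one frame. Rows whose l2 norm is at most lam are set exactly to zero,
    which is what makes the perturbation temporally sparse; the remaining
    rows are shrunk toward zero by lam.
    """
    out = np.zeros_like(P)
    for t in range(P.shape[0]):
        norm = np.linalg.norm(P[t])
        if norm > lam:
            out[t] = (1.0 - lam / norm) * P[t]  # shrink the whole frame
    return out

# Toy example: three "frames" with 2-dimensional perturbations.
P = np.array([[3.0, 4.0],    # norm 5  -> shrunk, kept
              [0.1, 0.1],    # norm ~0.14 -> zeroed out entirely
              [0.0, 2.0]])   # norm 2  -> shrunk, kept
print(prox_l21(P, 1.0))
```

In a full attack, such a proximal step would alternate with gradient steps on the classifier's loss; frames whose perturbation contributes little are eliminated entirely, concentrating the attack on the few frames from which perturbations propagate.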




How to Cite

Wei, X., Zhu, J., Yuan, S., & Su, H. (2019). Sparse Adversarial Perturbations for Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8973-8980.



AAAI Technical Track: Vision