A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories


  • Zhaohui Che Shanghai Jiao Tong University
  • Ali Borji MarkableAI
  • Guangtao Zhai Shanghai Jiao Tong University
  • Suiyi Ling University of Nantes
  • Jing Li Alibaba Group
  • Patrick Le Callet University of Nantes




Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of pre-trained source models can transfer to new target models, thus posing a security threat to black-box applications (where the attackers have no access to the target models). Despite adopting diverse architectures and parameters, source and target models often share similar decision boundaries. Therefore, an adversary capable of fooling several source models concurrently can potentially capture intrinsic transferable adversarial information that allows it to fool a broad class of other black-box target models. Current ensemble attacks, however, consider only a limited number of source models when crafting an adversarial example, and thus achieve poor transferability. In this paper, we propose a novel black-box attack, dubbed Serial-Mini-Batch-Ensemble-Attack (SMBEA). SMBEA divides a large number of pre-trained source models into several mini-batches. For each single batch, we design three new ensemble strategies to improve the intra-batch transferability. In addition, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous batch into the following batch. This way, the learned adversarial information is preserved and the inter-batch transferability is improved. Experiments indicate that our method outperforms state-of-the-art ensemble attacks on multiple pixel-to-pixel vision tasks, including image translation and salient region prediction. Our method successfully fools two online black-box saliency prediction systems: DeepGaze-II (Kummerer 2017) and SALICON (Huang et al. 2017). Finally, we also contribute a new repository to promote research on adversarial attack and defense over pixel-to-pixel tasks: https://github.com/CZHQuality/AAA-Pix2pix.
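The serial mini-batch scheme with cross-batch gradient memory described above can be sketched in highly simplified form. The snippet below is an illustrative toy, not the paper's exact algorithm: the function name `smbea_sketch`, the hyper-parameters, and the momentum-style memory update are our assumptions, and each "model" is reduced to a function returning a gradient direction (a stand-in for back-propagation through a real pre-trained network).

```python
import numpy as np

def smbea_sketch(x, source_models, batch_size=2, steps=10,
                 step_size=0.01, eps=0.05, decay=0.9):
    """Toy sketch of a serial mini-batch ensemble attack.

    `source_models` is a list of gradient functions: each maps an
    input array to a gradient of its loss w.r.t. that input.
    """
    x_adv = x.copy()
    memory = np.zeros_like(x)  # "long-term" gradient memory carried across batches
    # Divide the source models into serial mini-batches.
    for i in range(0, len(source_models), batch_size):
        batch = source_models[i:i + batch_size]
        for _ in range(steps):
            # Intra-batch ensemble: average the gradients over this batch.
            grad = np.mean([g(x_adv) for g in batch], axis=0)
            # Recursively accumulate memory from previous batches
            # (momentum-style update with L1-normalized gradients).
            memory = decay * memory + grad / (np.abs(grad).sum() + 1e-12)
            x_adv = x_adv + step_size * np.sign(memory)
            # Keep the perturbation inside the epsilon-ball of the clean input.
            x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Because `memory` is never reset between batches, adversarial directions learned from earlier mini-batches keep influencing the updates for later ones, which is the intuition behind the inter-batch transfer of "long-term" gradient memories.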




How to Cite

Che, Z., Borji, A., Zhai, G., Ling, S., Li, J., & Le Callet, P. (2020). A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3405-3413. https://doi.org/10.1609/aaai.v34i04.5743



AAAI Technical Track: Machine Learning