Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles

Authors

  • Bo Huang (Dongguan University of Technology, Dongguan, China)
  • Zhiwei Ke (Dongguan University of Technology, Dongguan, China; Computer Vision Institute, Shenzhen University, Shenzhen, China)
  • Yi Wang (Dongguan University of Technology, Dongguan, China)
  • Wei Wang (The University of New South Wales, Sydney, Australia)
  • Linlin Shen (Computer Vision Institute, Shenzhen University, Shenzhen, China; Shenzhen Institute of Artificial Intelligence & Robotics for Society)
  • Feng Liu (Computer Vision Institute, Shenzhen University, Shenzhen, China)

Keywords:

Adversarial Learning & Robustness, Adversarial Attacks & Robustness

Abstract

Learning-based classifiers are susceptible to adversarial examples. Most existing defence methods are devised for individual classifiers. Recent studies have shown that adversarial robustness can be increased by promoting diversity across an ensemble of models. In this paper, we propose an adversarial defence that encourages ensemble diversity in learning high-level feature representations and gradient dispersion during the simultaneous training of deep ensemble networks. We perform extensive evaluations under white-box and black-box attacks, including transferred examples and adaptive attacks. Our approach achieves a significant gain of up to 52% in adversarial robustness over the baseline and the state-of-the-art method on image benchmarks with complex data scenes. The proposed approach complements the defence paradigm of adversarial training and can further boost its performance. The source code is available at https://github.com/ALIS-Lab/AAAI2021-PDD.
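The abstract's central idea, penalizing agreement among ensemble members' high-level feature representations so that an adversarial perturbation crafted against one member transfers poorly to the others, can be illustrated with a minimal sketch. This is not the paper's actual loss (the method details are in the full text, not on this page); the function `diversity_penalty` is a hypothetical illustration that scores an ensemble by the mean pairwise cosine similarity of its members' feature vectors for the same input, a quantity a diversified training objective would drive down.

```python
import math

def diversity_penalty(features):
    """Mean pairwise cosine similarity among ensemble members' feature vectors.

    features: a list of feature vectors (one per ensemble member) extracted
    from the same input. Returns a value in [-1, 1]; lower means the members'
    representations are more diverse. Identical vectors score 1.0, mutually
    orthogonal vectors score 0.0.
    """
    def cos(a, b):
        # Cosine similarity between two vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    n = len(features)
    # Average over all unordered pairs of distinct members.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(features[i], features[j]) for i, j in pairs) / len(pairs)

# Two members with orthogonal features: maximally diverse under this score.
print(diversity_penalty([[1.0, 0.0], [0.0, 1.0]]))  # 0.0
# Two members with identical features: no diversity.
print(diversity_penalty([[1.0, 0.0], [1.0, 0.0]]))  # 1.0
```

In a training loop, a term like this would be added to the ensemble's classification loss so that gradient descent jointly fits the labels and pushes members' representations apart; the abstract's "gradient dispersion" term plays an analogous role on input gradients rather than features.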

Published

2021-05-18

How to Cite

Huang, B., Ke, Z., Wang, Y., Wang, W., Shen, L., & Liu, F. (2021). Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7823-7831. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16955

Section

AAAI Technical Track on Machine Learning II