Combating Adversaries with Anti-adversaries

Authors

  • Motasem Alfarra, King Abdullah University of Science and Technology (KAUST)
  • Juan C. Perez, Universidad de los Andes; King Abdullah University of Science and Technology (KAUST)
  • Ali Thabet, Facebook
  • Adel Bibi, University of Oxford
  • Philip H.S. Torr, University of Oxford
  • Bernard Ghanem, King Abdullah University of Science and Technology (KAUST)

DOI:

https://doi.org/10.1609/aaai.v36i6.20545

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input perturbation in the opposite direction of the adversarial one and feeds the classifier a perturbed version of the input. Our approach is training-free and theoretically supported. We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models, and we conduct large-scale experiments spanning black-box to adaptive attacks on CIFAR10, CIFAR100, and ImageNet. Our layer significantly enhances model robustness at no cost to clean accuracy.
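The core mechanism described in the abstract can be sketched in a few lines: take the classifier's own predicted label for the input, then perturb the input by gradient *ascent* on the confidence of that label (the reverse of an adversary's descent) before classifying. Below is a minimal NumPy sketch under assumed simplifications: a hypothetical linear softmax classifier stands in for the deep network, and the step count and step size are illustrative, not the paper's settings.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class AntiAdversaryLayer:
    """Sketch of the anti-adversary idea for a linear classifier
    f(x) = Wx + b (assumed stand-in for a deep network)."""

    def __init__(self, W, b, steps=2, step_size=0.1):
        self.W, self.b = W, b
        self.steps = steps            # illustrative hyperparameters,
        self.step_size = step_size    # not the paper's values

    def logits(self, x):
        return self.W @ x + self.b

    def predict(self, x):
        # 1) pseudo-label: the classifier's prediction on the clean input
        y_hat = int(np.argmax(self.logits(x)))
        delta = np.zeros_like(x)
        for _ in range(self.steps):
            p = softmax(self.logits(x + delta))
            # gradient of log p[y_hat] w.r.t. the input (closed form
            # for a linear model): W[y_hat] - sum_c p_c * W[c]
            g = self.W[y_hat] - p @ self.W
            # ascend on the predicted class's confidence: the opposite
            # direction of an adversary's confidence-minimizing step
            delta += self.step_size * np.sign(g)
        # 2) classify the anti-adversarially perturbed input
        return int(np.argmax(self.logits(x + delta)))
```

Because the layer needs no labels and no retraining, it can wrap an already-trained model; with `steps=0` it reduces exactly to the base classifier.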

Published

2022-06-28

How to Cite

Alfarra, M., Perez, J. C., Thabet, A., Bibi, A., Torr, P. H., & Ghanem, B. (2022). Combating Adversaries with Anti-adversaries. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 5992-6000. https://doi.org/10.1609/aaai.v36i6.20545

Section

AAAI Technical Track on Machine Learning I