Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks

Authors

  • Huimin Zeng, Technical University of Munich
  • Chen Zhu, University of Maryland, College Park
  • Tom Goldstein, University of Maryland, College Park
  • Furong Huang, University of Maryland, College Park

Keywords:

Adversarial Learning & Robustness

Abstract

Adversarial training has proven to be an effective method for defending against adversarial examples, and is one of the few defenses that withstands strong attacks. However, traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution, which is clearly unrealistic, as the attacker could choose to focus on more vulnerable examples. We present a weighted minimax risk optimization that defends against non-uniform attacks, achieving robustness against adversarial examples under perturbed test data distributions. Our modified risk assigns importance weights to different adversarial examples and adaptively focuses on harder examples that are misclassified or at higher risk of being misclassified. The designed risk allows the training process to learn a strong defense by optimizing the importance weights. Experiments show that our model significantly improves state-of-the-art adversarial accuracy under non-uniform attacks without a significant drop under uniform attacks.
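The core idea of the abstract — reweighting per-example adversarial losses so that harder examples dominate the objective — can be sketched as follows. This is a minimal, hedged illustration, not the paper's exact update rule: the softmax-style reweighting, the `temperature` parameter, and the function names `update_importance_weights` and `weighted_minimax_risk` are illustrative assumptions.

```python
import numpy as np

def update_importance_weights(losses, temperature=1.0):
    """Assign larger weight to examples with larger adversarial loss.

    Illustrative softmax reweighting (an assumed rule, not the
    paper's exact optimization of the importance weights).
    """
    scaled = np.asarray(losses, dtype=float) / temperature
    scaled -= scaled.max()            # shift for numerical stability
    w = np.exp(scaled)
    return w / w.sum()                # weights form a distribution

def weighted_minimax_risk(losses, weights):
    """Weighted empirical risk: sum_i w_i * (inner-max adversarial loss_i)."""
    return float(np.dot(weights, losses))

# Toy per-example adversarial losses: one hard example, three easy ones.
losses = [2.5, 0.3, 0.4, 0.2]
w = update_importance_weights(losses)
risk = weighted_minimax_risk(losses, w)
```

Because the hard example receives most of the weight, the weighted risk exceeds the unweighted average loss, which is exactly the behavior the abstract describes: training effort concentrates on examples a non-uniform attacker would target.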

Published

2021-05-18

How to Cite

Zeng, H., Zhu, C., Goldstein, T., & Huang, F. (2021). Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10815-10823. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17292

Section

AAAI Technical Track on Machine Learning V