WAT: Improve the Worst-Class Robustness in Adversarial Training

Authors

  • Boqi Li Wuhan University
  • Weiwei Liu Wuhan University

DOI:

https://doi.org/10.1609/aaai.v37i12.26749

Keywords:

General

Abstract

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Adversarial training (AT) is a popular and effective strategy for defending against adversarial attacks. Recent works have shown that a robust model well trained by AT exhibits a remarkable robustness disparity among classes, and have proposed various methods to obtain consistent robust accuracy across classes. Unfortunately, these methods sacrifice a good deal of average robust accuracy. Accordingly, this paper proposes a novel framework of worst-class adversarial training and leverages no-regret dynamics to solve this problem. Our goal is to obtain a classifier that performs well on the worst class while sacrificing only a little average robust accuracy. We then rigorously analyze the theoretical properties of the proposed algorithm and derive a generalization error bound in terms of the worst-class robust risk. Furthermore, we propose a measure to evaluate the proposed method in terms of both the average and worst-class accuracies. Experiments on various datasets and networks show that the proposed method outperforms state-of-the-art approaches.
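The full algorithm is in the paper linked by the DOI above. As a rough illustration of the general idea only, the sketch below shows how per-class weights might be maintained with a Hedge-style (multiplicative-weights) no-regret update inside a PGD-based adversarial training loop; it is not the authors' exact WAT procedure, and all function names and hyperparameters (eps, alpha, eta) are assumptions chosen for the example.

# Illustrative sketch only: Hedge-style reweighting of per-class robust losses
# inside PGD adversarial training. Not the paper's exact WAT algorithm;
# hyperparameters and names are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD: maximize cross-entropy within an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def worst_class_at_epoch(model, loader, optimizer, class_weights, eta=0.05, device="cuda"):
    """One epoch: minimize the class-weighted robust loss, then apply a
    no-regret (exponentiated-gradient) update so classes with higher robust
    loss receive larger weight in the next epoch."""
    model.train()
    num_classes = class_weights.numel()
    loss_sum = torch.zeros(num_classes, device=device)
    count = torch.zeros(num_classes, device=device)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
        # Each example is weighted by the current weight of its class.
        loss = (class_weights[y] * per_sample).sum() / class_weights[y].sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Accumulate per-class robust loss for the weight update.
        loss_sum.scatter_add_(0, y, per_sample.detach())
        count.scatter_add_(0, y, torch.ones_like(per_sample))
    avg_class_loss = loss_sum / count.clamp(min=1)
    new_w = class_weights * torch.exp(eta * avg_class_loss)  # Hedge step
    return new_w / new_w.sum()

# Usage: start from uniform class weights and update them every epoch.
# class_weights = torch.full((num_classes,), 1.0 / num_classes, device="cuda")
# for epoch in range(num_epochs):
#     class_weights = worst_class_at_epoch(model, loader, optimizer, class_weights)

Upweighting the classes with the highest robust loss pushes the minimizer toward improving the worst class, while keeping the weights a distribution (rather than placing all mass on one class) limits the drop in average robust accuracy.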

Published

2023-06-26

How to Cite

Li, B., & Liu, W. (2023). WAT: Improve the Worst-Class Robustness in Adversarial Training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14982-14990. https://doi.org/10.1609/aaai.v37i12.26749

Section

AAAI Special Track on Safe and Robust AI