Understanding Catastrophic Overfitting in Single-step Adversarial Training

Authors

  • Hoki Kim, Seoul National University
  • Woojin Lee, Seoul National University
  • Jaewook Lee, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v35i9.16989

Keywords:

Adversarial Learning & Robustness

Abstract

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed. This is a phenomenon in which, during single-step adversarial training, the robust accuracy against projected gradient descent (PGD) suddenly drops to 0% after a few epochs, whereas the robust accuracy against the fast gradient sign method (FGSM) increases to 100%. In this paper, we demonstrate that catastrophic overfitting is closely related to a characteristic of single-step adversarial training: it uses only adversarial examples with the maximum perturbation, rather than adversarial examples at all magnitudes along the adversarial direction, which leads to decision boundary distortion and a highly curved loss surface. Based on this observation, we propose a simple method that not only prevents catastrophic overfitting, but also overrides the belief that it is difficult to prevent multi-step adversarial attacks with single-step adversarial training.
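To make the setting described in the abstract concrete, the following is a minimal PyTorch sketch of single-step (FGSM) adversarial training together with the multi-step PGD attack used to measure robust accuracy. It illustrates only the standard FGSM and PGD formulations referenced above, not the authors' proposed method; the model, epsilon, step size, and number of PGD steps are assumptions chosen for illustration.

```python
# Minimal sketch (assumptions, not the authors' method): single-step FGSM
# adversarial training, with robustness evaluated against multi-step PGD.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps):
    """Single-step attack: one sign-gradient step of maximum size eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # FGSM always applies the maximum perturbation eps in the adversarial direction.
    return (x + eps * delta.grad.sign()).clamp(0, 1).detach()


def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step attack: iterated sign-gradient steps projected onto the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()


def fgsm_train_step(model, optimizer, x, y, eps=8 / 255):
    """One step of single-step adversarial training on FGSM examples only."""
    x_adv = fgsm_attack(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup, catastrophic overfitting would appear as the accuracy on `pgd_attack` examples collapsing toward 0% during training while the accuracy on `fgsm_attack` examples climbs toward 100%, as the abstract describes.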

Published

2021-05-18

How to Cite

Kim, H., Lee, W., & Lee, J. (2021). Understanding Catastrophic Overfitting in Single-step Adversarial Training. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8119-8127. https://doi.org/10.1609/aaai.v35i9.16989

Section

AAAI Technical Track on Machine Learning II