Efficient Robust Training via Backward Smoothing

Authors

  • Jinghui Chen, Penn State University
  • Yu Cheng, Microsoft Research
  • Zhe Gan, Microsoft
  • Quanquan Gu, University of California, Los Angeles
  • Jingjing Liu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v36i6.20571

Keywords:

Machine Learning (ML)

Abstract

Adversarial training is so far the most effective strategy for defending against adversarial examples. However, it suffers from high computational cost due to the iterative adversarial attacks performed in each training step. Recent studies show that it is possible to achieve fast adversarial training by performing a single-step attack with random initialization. However, such an approach still lags behind state-of-the-art adversarial training algorithms in both stability and model robustness. In this work, we develop a new understanding of fast adversarial training by viewing random initialization as performing randomized smoothing for better optimization of the inner maximization problem. Following this new perspective, we also propose a new initialization strategy, backward smoothing, to further improve the stability and model robustness over single-step robust training methods. Experiments on multiple benchmarks demonstrate that our method achieves model robustness similar to the original TRADES method while using much less training time (~3x improvement with the same training schedule).
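To make the baseline concrete, below is a minimal PyTorch-style sketch of one training step of single-step adversarial training with random initialization, the fast adversarial training approach the abstract refers to. It is illustrative only and is not the authors' backward-smoothing method; the function name fgsm_rs_step and the parameters eps and alpha are hypothetical labels introduced here for illustration.

    import torch
    import torch.nn.functional as F

    def fgsm_rs_step(model, x, y, eps, alpha):
        # Random initialization: start the attack from a uniform point in the L-inf ball.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        # Single-step attack: one gradient-sign step on the inner maximization objective.
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        # Outer minimization: train on the resulting adversarial example.
        x_adv = (x + delta).clamp(0.0, 1.0)
        return F.cross_entropy(model(x_adv), y)

In a standard training loop, one would call loss = fgsm_rs_step(model, x, y, eps, alpha), then loss.backward() and an optimizer step; the paper's contribution is a different (backward smoothing) initialization for this single-step inner attack, not shown here.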

Published

2022-06-28

How to Cite

Chen, J., Cheng, Y., Gan, Z., Gu, Q., & Liu, J. (2022). Efficient Robust Training via Backward Smoothing. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6222-6230. https://doi.org/10.1609/aaai.v36i6.20571

Section

AAAI Technical Track on Machine Learning I