Self-Progressing Robust Training

Authors

  • Minhao Cheng UCLA and IBM Research
  • Pin-Yu Chen IBM Research
  • Sijia Liu Michigan State University
  • Shiyu Chang IBM Research
  • Cho-Jui Hsieh UCLA
  • Payel Das IBM Research

Keywords

(Deep) Neural Network Algorithms

Abstract

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems. Current robust training methods such as adversarial training explicitly use an "attack" (e.g., an ℓ∞-norm bounded perturbation) to generate adversarial examples during model training to improve adversarial robustness. In this paper, we take a different perspective and propose a new framework, SPROUT (self-progressing robust training). During model training, SPROUT progressively adjusts the training label distribution via our proposed parametrized label smoothing technique, making training free of attack generation and more scalable. We also motivate SPROUT with a general formulation based on vicinity risk minimization, which includes many robust training methods as special cases. Compared with state-of-the-art adversarial training methods (PGD-ℓ∞ and TRADES) under ℓ∞-norm bounded attacks and various invariance tests, SPROUT consistently attains superior performance and is more scalable to large neural networks. Our results shed new light on scalable, effective, and attack-independent robust training methods.
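
The core idea named in the abstract, parametrized label smoothing, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: it mixes one-hot labels with a label distribution `dist` under a smoothing strength `alpha` (both names are assumptions for illustration). Standard label smoothing fixes `dist` to the uniform distribution; SPROUT instead progressively adjusts this distribution during training.

```python
# Sketch of generic parametrized label smoothing (illustrative only;
# the exact SPROUT update rule is defined in the paper).
import torch
import torch.nn.functional as F

def smoothed_labels(targets, num_classes, alpha, dist):
    """Mix one-hot labels with a label distribution `dist`.

    targets: (batch,) integer class labels
    alpha:   smoothing strength in [0, 1] (assumed hyperparameter)
    dist:    (num_classes,) probability vector; uniform in standard
             label smoothing, progressively adjusted in SPROUT
    """
    one_hot = F.one_hot(targets, num_classes).float()
    return (1.0 - alpha) * one_hot + alpha * dist

# Usage: cross-entropy against the smoothed (soft) targets
logits = torch.randn(4, 10)             # toy model outputs
targets = torch.tensor([1, 3, 5, 7])
dist = torch.full((10,), 1.0 / 10)      # uniform here for illustration
soft = smoothed_labels(targets, 10, alpha=0.1, dist=dist)
loss = torch.sum(-soft * F.log_softmax(logits, dim=1), dim=1).mean()
```

Training against such soft targets requires no adversarial example generation, which is what makes the approach attack-free and more scalable than methods like PGD-based adversarial training.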

Published

2021-05-18

How to Cite

Cheng, M., Chen, P.-Y., Liu, S., Chang, S., Hsieh, C.-J., & Das, P. (2021). Self-Progressing Robust Training. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7107-7115. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16874

Section

AAAI Technical Track on Machine Learning I