Regularizing Deep Networks Using Efficient Layerwise Adversarial Training

Authors

  • Swami Sankaranarayanan, University of Maryland, College Park
  • Arpit Jain, GE Global Research
  • Rama Chellappa, University of Maryland, College Park
  • Ser Nam Lim, Avitas Systems, GE Global Research

DOI:

https://doi.org/10.1609/aaai.v32i1.11688

Keywords:

Deep Learning, Adversarial Training, Regularization, Classification

Abstract

Adversarial training has been shown to regularize deep neural networks in addition to increasing their robustness to adversarial examples. However, the regularization effect on very deep state-of-the-art networks has not been fully investigated. In this paper, we present a novel approach to regularizing deep neural networks by perturbing intermediate layer activations in an efficient manner. We use these perturbations to train very deep models such as ResNets and WideResNets, and we show improved performance across datasets of different sizes, including CIFAR-10, CIFAR-100, and ImageNet. Our ablation experiments show that the proposed approach not only provides stronger regularization than Dropout but also achieves adversarial robustness comparable to that of traditional adversarial training approaches.
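The sketch below illustrates the core idea from the abstract in PyTorch: an intermediate activation is perturbed with the sign of the loss gradient cached from the previous mini-batch, so no extra forward/backward pass is required. The network split, the epsilon value, and the gradient-caching scheme here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LayerwisePerturbedNet(nn.Module):
    """Sketch of layerwise adversarial training (assumptions noted above)."""

    def __init__(self, features: nn.Module, classifier: nn.Module, eps: float = 0.03):
        super().__init__()
        self.features = features      # layers before the perturbation point (assumed split)
        self.classifier = classifier  # layers after the perturbation point
        self.eps = eps                # perturbation magnitude (illustrative value)
        self._grad_sign = None        # sign of d(loss)/d(activation) from the previous batch

    def forward(self, x):
        act = self.features(x)
        if self.training and self._grad_sign is not None \
                and self._grad_sign.shape == act.shape:
            # Perturb the intermediate activation in the adversarial direction
            # computed on the previous mini-batch (no extra pass needed).
            act = act + self.eps * self._grad_sign
        if self.training:
            act.retain_grad()         # keep act.grad so it can be read after backward()
            self._last_act = act
        return self.classifier(act)

    def cache_grad_sign(self):
        # Call after loss.backward() to store the next batch's perturbation direction.
        last = getattr(self, "_last_act", None)
        if last is not None and last.grad is not None:
            self._grad_sign = last.grad.detach().sign()

# Usage inside a standard training loop (model, images, labels, opt assumed):
#   logits = model(images)
#   loss = nn.functional.cross_entropy(logits, labels)
#   opt.zero_grad(); loss.backward()
#   model.cache_grad_sign()
#   opt.step()
```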

Published

2018-04-29

How to Cite

Sankaranarayanan, S., Jain, A., Chellappa, R., & Lim, S. N. (2018). Regularizing Deep Networks Using Efficient Layerwise Adversarial Training. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11688