Adversarial Dropout for Supervised and Semi-Supervised Learning

Authors

  • Sungrae Park KAIST
  • JunKeon Park KAIST
  • Su-Jin Shin KAIST
  • Il-Chul Moon KAIST

DOI:

https://doi.org/10.1609/aaai.v32i1.11634

Keywords:

adversarial training, regularization, deep learning

Abstract

Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has improved the generalization performance of neural networks. Whereas adversarial training perturbs individual inputs, this paper introduces adversarial dropout: a minimal set of dropped units that maximizes the divergence between 1) the training supervision and 2) the outputs of the network with those units dropped. The identified adversarial dropout is used to automatically reconfigure the neural network during training, and we demonstrate that simultaneous training on the original and the reconfigured networks improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. Analyzing the trained models to explain the improvement, we found that adversarial dropout increases the sparsity of neural networks more than standard dropout. Finally, we also prove that adversarial dropout is a regularization term whose strength is specified by a rank-valued hyper-parameter rather than a continuous-valued one.
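The core idea above, searching for a worst-case dropout mask within a small boundary of a base mask, can be illustrated with a minimal sketch. This is not the authors' implementation: the tiny two-layer network, the greedy bit-flip search, and the `budget` parameter (a stand-in for the paper's boundary constraint on how far the adversarial mask may deviate) are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; weights are illustrative, not from the paper.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, mask):
    """Forward pass with a dropout mask applied to the hidden layer."""
    h = np.maximum(x @ W1, 0.0) * mask      # ReLU, then element-wise dropout
    logits = h @ W2
    e = np.exp(logits - logits.max())       # stable softmax
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence between the supervision p and the network output q."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def adversarial_dropout(x, target, base_mask, budget=2):
    """Greedy sketch: flip at most `budget` mask entries, each time taking
    the single flip that most increases KL(target || output). This is a
    crude approximation of searching for a worst-case dropout mask near
    the base mask."""
    mask = base_mask.copy()
    for _ in range(budget):
        cur = kl(target, forward(x, mask))
        best_gain, best_i = 0.0, None
        for i in range(mask.size):
            trial = mask.copy()
            trial[i] = 1.0 - trial[i]       # flip one keep/drop decision
            gain = kl(target, forward(x, trial)) - cur
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:                  # no flip increases the divergence
            break
        mask[best_i] = 1.0 - mask[best_i]
    return mask
```

In the paper's training scheme, the network is then trained to make its output under this adversarial mask consistent with the supervision (or, in the semi-supervised case, with its own output under the base mask), which the sketch above does not show.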

Published

2018-04-29

How to Cite

Park, S., Park, J., Shin, S.-J., & Moon, I.-C. (2018). Adversarial Dropout for Supervised and Semi-Supervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11634