Adversarial Dropout for Supervised and Semi-Supervised Learning


  • Sungrae Park KAIST
  • JunKeon Park KAIST
  • Su-Jin Shin KAIST
  • Il-Chul Moon KAIST



Keywords: adversarial training, regularization, deep learning


Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has improved the generalization performance of neural networks. Whereas adversarial training perturbs individual inputs to enhance generality, this paper introduces adversarial dropout: a minimal set of dropped units that maximizes the divergence between 1) the training supervision and 2) the outputs of the network under that dropout. The identified adversarial dropouts are used to automatically reconfigure the neural network during training, and we demonstrate that simultaneously training the original and the reconfigured network improves generalization on supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained models to identify the reasons for the performance improvement and found that adversarial dropout increases the sparsity of neural networks more than standard dropout. Finally, we also prove that adversarial dropout acts as a regularization term governed by a rank-valued hyper-parameter, in contrast to the continuous-valued parameter that specifies the strength of conventional regularizers.
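The core idea in the abstract, searching for the dropout mask that most increases the divergence from the training supervision while changing at most a fixed number of units (the rank-valued budget), can be sketched with a first-order approximation on a toy single-output model. All names, the squared-error divergence, and the greedy flip rule below are illustrative assumptions for this sketch, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

h = rng.normal(size=8)        # hidden activations of one example (toy model)
w = rng.normal(size=8)        # output weights for a scalar output
y = 0.0                       # training target (the "supervision")
delta = 0.25                  # rank-valued budget: flip at most delta * H units

mask = rng.integers(0, 2, size=8).astype(float)  # random base dropout mask

def loss(m):
    """Squared-error divergence between supervision y and masked output."""
    out = (h * m) @ w
    return 0.5 * (out - y) ** 2

# First-order score: gradient of the loss w.r.t. each mask entry.
out = (h * mask) @ w
g = (out - y) * w * h

# Estimated loss increase from flipping unit i: +g[i] if the unit is off
# (flip 0 -> 1), -g[i] if it is on (flip 1 -> 0).
gain = np.where(mask == 0, g, -g)

# Greedily flip the highest-gain units, but only where the first-order
# estimate is positive, and never more than the budget allows.
budget = int(delta * len(h))
adv_mask = mask.copy()
for i in np.argsort(-gain)[:budget]:
    if gain[i] > 0:
        adv_mask[i] = 1.0 - adv_mask[i]

# adv_mask is the "adversarial dropout"; training would then penalize the
# divergence between the outputs under mask and under adv_mask.
```

For this linear toy model, every accepted flip moves the output further from the target in the same direction, so the divergence under `adv_mask` is never smaller than under the random `mask`; the paper applies this kind of worst-case mask search inside a deep network rather than a linear layer.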




How to Cite

Park, S., Park, J., Shin, S.-J., & Moon, I.-C. (2018). Adversarial Dropout for Supervised and Semi-Supervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).