Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Authors

  • Subhajit Chaudhury, The University of Tokyo

DOI:

https://doi.org/10.1609/aaai.v34i10.7129

Abstract

Neural networks have contributed to tremendous progress in the domains of computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can easily be compromised by adding small imperceptible perturbations to their inputs. My thesis summary frames the problem of adversarial robustness as an equivalent problem of learning suitable features that lead to good generalization in neural networks. This is motivated by learning in humans, which is not trivially fooled by such perturbations, owing to robust feature learning that exhibits good out-of-sample generalization.
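For readers unfamiliar with the attack model the abstract refers to, the sketch below illustrates one standard way such small imperceptible perturbations are generated: the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015). This is only an illustrative example of the general vulnerability, not the method studied in the thesis; the function name and its parameters (model, x, y, epsilon) are assumed placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    epsilon: float = 8 / 255) -> torch.Tensor:
        """One-step L-infinity attack (FGSM); epsilon bounds the
        per-pixel perturbation so it stays visually imperceptible."""
        model.eval()
        x_adv = x.clone().detach().requires_grad_(True)
        # Gradient of the classification loss with respect to the input.
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Move each pixel by epsilon in the direction that increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Despite the tiny step size, a single such gradient-sign step is often enough to flip the prediction of an otherwise accurate image classifier, which is the brittleness the abstract contrasts with human out-of-sample generalization.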

Published

2020-04-03

How to Cite

Chaudhury, S. (2020). Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities. Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13714-13715. https://doi.org/10.1609/aaai.v34i10.7129

Issue

Vol. 34 No. 10 (2020)

Section

Doctoral Consortium Track