First Line of Defense: A Robust First Layer Mitigates Adversarial Attacks
DOI: https://doi.org/10.1609/aaai.v39i7.32771
Abstract
Adversarial training (AT) incurs significant computational overhead, leading to growing interest in designing inherently robust architectures. We demonstrate that a carefully designed first layer of the neural network can serve as an implicit adversarial noise filter (ANF). This filter is created using a combination of a large kernel size, an increased number of convolution filters, and a maxpool operation. We show that integrating this filter as the first layer in architectures such as ResNet, VGG, and EfficientNet results in adversarially robust networks. Our approach achieves higher adversarial accuracies than existing natively robust architectures without AT and is competitive with adversarially trained architectures across a wide range of datasets. Supporting our findings, we show that (a) the decision regions for our method have better margins, (b) the visualized loss surfaces are smoother, (c) the modified peak signal-to-noise ratio (mPSNR) values at the output of the ANF are higher, (d) high-frequency components are more attenuated, and (e) architectures incorporating the ANF exhibit better denoising under Gaussian noise compared to baseline architectures.
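The abstract describes the ANF as a combination of a large-kernel convolution, a widened set of filters, and a maxpool operation placed at the network's input. The sketch below illustrates that composition in PyTorch; the specific values (kernel size 11, 128 filters, 2×2 maxpool, class name `AdversarialNoiseFilter`) are illustrative assumptions, not the paper's reported hyperparameters.

```python
import torch
import torch.nn as nn


class AdversarialNoiseFilter(nn.Module):
    """Sketch of an ANF-style first layer: large kernel, wide conv, maxpool.

    All hyperparameters here are assumed for illustration; the paper's
    actual choices may differ.
    """

    def __init__(self, in_channels=3, out_channels=128, kernel_size=11):
        super().__init__()
        # Large kernel with 'same'-style padding preserves spatial size.
        self.conv = nn.Conv2d(
            in_channels, out_channels, kernel_size, padding=kernel_size // 2
        )
        # Maxpool downsamples by 2, discarding small high-frequency perturbations.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.conv(x))


# Example: a CIFAR-sized input passes through the filter.
x = torch.randn(1, 3, 32, 32)
anf = AdversarialNoiseFilter()
y = anf(x)  # spatial size halved, channel count widened
```

In a full network, this module would replace the stem (first convolution) of a backbone such as ResNet or VGG, with the rest of the architecture consuming its downsampled, widened output.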
Published
2025-04-11
How to Cite
Suresh, J., Nayak, N., & Kalyani, S. (2025). First Line of Defense: A Robust First Layer Mitigates Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7176–7183. https://doi.org/10.1609/aaai.v39i7.32771
Section
AAAI Technical Track on Computer Vision VI