Robust-by-Design Classification via Unitary-Gradient Neural Networks
DOI:
https://doi.org/10.1609/aaai.v37i12.26721
Keywords:
General
Abstract
The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks. Knowing the minimal adversarial perturbation of any input x, or, equivalently, knowing the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions. Unfortunately, state-of-the-art techniques for computing such a distance are computationally expensive and hence not suited for online applications. This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), that, from a theoretical perspective, directly output the exact distance of x from the classification boundary, rather than a probability score (e.g., SoftMax). SDCs represent a family of robust-by-design classifiers. To practically address the theoretical requirements of an SDC, a novel network architecture named Unitary-Gradient Neural Network is presented. Experimental results show that the proposed architecture approximates a signed distance classifier, hence allowing an online certifiable classification of x at the cost of a single inference.
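The core idea can be illustrated in the simplest case. For a linear classifier f(x) = w·x + b with a unit-norm weight vector (i.e., a gradient of unit norm everywhere, the property the Unitary-Gradient architecture aims for), the output f(x) is exactly the signed distance of x from the decision boundary, so |f(x)| serves as a certified robustness radius at the cost of a single inference. The sketch below (not the paper's implementation; all variable names are illustrative) verifies this property numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)
w /= np.linalg.norm(w)          # enforce the unit-gradient condition ||w|| = 1
b = 0.5

def f(x):
    """Linear classifier; equals the signed distance to the boundary when ||w|| = 1."""
    return w @ x + b

x = rng.normal(size=3)

# Orthogonally project x onto the decision boundary {x : f(x) = 0}
# and check that the projection distance matches |f(x)|.
x_proj = x - f(x) * w
assert np.isclose(w @ x_proj + b, 0.0)
assert np.isclose(np.linalg.norm(x - x_proj), abs(f(x)))

print(abs(f(x)))  # certified L2 robustness radius for x: no perturbation
                  # smaller than this can flip the sign of f(x)
```

A general deep network does not satisfy this property; the paper's contribution is an architecture whose gradient has (approximately) unit norm everywhere, so its output approximates this signed distance for nonlinear boundaries as well.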
Published
2023-06-26
How to Cite
Brau, F., Rossolini, G., Biondi, A., & Buttazzo, G. (2023). Robust-by-Design Classification via Unitary-Gradient Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14729-14737. https://doi.org/10.1609/aaai.v37i12.26721
Issue
Section
AAAI Special Track on Safe and Robust AI