PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks

Authors

  • Chen Feng Department of Electronic and Electrical Engineering, University College London
  • Ziquan Liu School of Electronic Engineering and Computer Science, Queen Mary University of London
  • Zhuo Zhi Department of Electronic and Electrical Engineering, University College London
  • Ilija Bogunovic Department of Electronic and Electrical Engineering, University College London
  • Carsten Gerner-Beuerle Faculty of Laws, University College London
  • Miguel Rodrigues AI Centre, Department of Electronic and Electrical Engineering, University College London

DOI

https://doi.org/10.1609/aaai.v39i3.32300

Abstract

It is widely known that state-of-the-art machine learning models, including vision and language models, can be seriously compromised by adversarial perturbations. It is therefore increasingly relevant to develop capabilities to certify their performance in the presence of the most effective adversarial attacks. Our paper offers a new approach to certifying the performance of machine learning models in the presence of adversarial attacks with population-level risk guarantees. In particular, we introduce the notion of an (α,ζ)-safe machine learning model. We propose a hypothesis-testing procedure, based on the availability of a calibration set, that delivers the following statistical guarantee: the probability of declaring that the adversarial (population) risk of a machine learning model is less than α (i.e., that the model is safe) when the model is in fact unsafe (i.e., its adversarial population risk is higher than α) is less than ζ. We also propose Bayesian optimization algorithms to efficiently determine whether a machine learning model is (α,ζ)-safe in the presence of an adversarial attack, along with the associated statistical guarantees. We apply our framework to a range of machine learning models, including Vision Transformer (ViT) and ResNet models of various sizes, impaired by a variety of adversarial attacks, such as PGDAttack, MomentumAttack, GenAttack, and BanditAttack, to illustrate the operation of our approach. Importantly, we show that ViTs are generally more robust to adversarial attacks than ResNets, and that larger models are generally more robust than smaller ones. Our approach goes beyond existing empirical adversarial risk-based certification guarantees: it formulates rigorous (and provable) performance guarantees that can be used to satisfy regulatory requirements mandating the use of state-of-the-art technical tools.
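To make the guarantee above concrete, the sketch below illustrates the flavor of a calibration-set-based safety test using a standard one-sided binomial tail bound. This is a hedged illustration of the hypothesis-testing idea only, not the paper's actual PROSAC procedure (which additionally searches over attack hyper-parameters via Bayesian optimization); the function name, the choice of binomial test, and the example numbers are our own assumptions.

```python
# Hedged sketch: declare a model (alpha, zeta)-safe if a one-sided binomial
# test on calibration-set adversarial errors rejects "risk > alpha" at level
# zeta. This is NOT the paper's PROSAC procedure, only an illustration of the
# hypothesis-testing idea described in the abstract.
from scipy.stats import binom

def declare_safe(num_errors: int, n: int, alpha: float, zeta: float) -> bool:
    """Test H0: adversarial population risk > alpha vs. H1: risk <= alpha.

    num_errors: adversarially induced errors observed on n calibration points.
    Returns True (declare safe) only if the p-value under the boundary risk
    alpha is at most zeta, so that P(declare safe | model unsafe) <= zeta.
    """
    # P[Binomial(n, alpha) <= num_errors]: probability of observing this few
    # errors if the true risk were exactly alpha (worst case under H0).
    p_value = binom.cdf(num_errors, n, alpha)
    return p_value <= zeta

# Example (hypothetical numbers): 3 adversarial errors on 1000 calibration
# points, target risk alpha = 0.05, error level zeta = 0.01.
print(declare_safe(3, 1000, alpha=0.05, zeta=0.01))  # True: declared safe
```

Because the binomial tail is monotone in the underlying risk, controlling the p-value at the boundary risk α bounds the false "safe" declaration probability by ζ for every unsafe model, which is precisely the (α,ζ)-safety guarantee stated in the abstract.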

Published

2025-04-11

How to Cite

Feng, C., Liu, Z., Zhi, Z., Bogunovic, I., Gerner-Beuerle, C., & Rodrigues, M. (2025). PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2933–2941. https://doi.org/10.1609/aaai.v39i3.32300

Issue

Vol. 39 No. 3 (2025)

Section

AAAI Technical Track on Computer Vision II