Efficient Certification of Spatial Robustness

Authors

  • Anian Ruoss ETH Zurich
  • Maximilian Baader ETH Zurich
  • Mislav Balunović ETH Zurich
  • Martin Vechev ETH Zurich

Keywords

Adversarial Attacks & Robustness

Abstract

Recent work has exposed the vulnerability of computer vision models to vector field attacks. Because such models are widely deployed in safety-critical applications, it is crucial to quantify their robustness against these spatial transformations. However, existing work quantifies robustness against vector field deformations only empirically, via adversarial attacks, which provide no provable guarantees. In this work, we propose novel convex relaxations that enable us, for the first time, to provide a certificate of robustness against vector field transformations. Our relaxations are model-agnostic and can be leveraged by a wide range of neural network verifiers. Experiments on various network architectures and datasets demonstrate the effectiveness and scalability of our method.
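To make the threat model concrete: a vector field attack assigns each pixel a small displacement and resamples the image accordingly. The sketch below is an illustrative NumPy implementation of such a deformation via bilinear interpolation; the function name and clamping behavior are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def apply_vector_field(img, vf):
    """Deform a grayscale image by a per-pixel displacement field.

    img: (H, W) array; vf: (H, W, 2) array of (dy, dx) displacements.
    Each output pixel samples the input at (y + dy, x + dx) using
    bilinear interpolation, clamping coordinates to the image border.
    (Illustrative sketch, not the authors' code.)
    """
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates shifted by the vector field, clamped to bounds.
    sy = np.clip(ys + vf[..., 0], 0, H - 1)
    sx = np.clip(xs + vf[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0
    wx = sx - x0
    # Bilinear interpolation over the four neighbouring pixels.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Certification then asks whether the classifier's prediction is invariant for *all* displacement fields within a bounded set, which is what the paper's convex relaxations make tractable to verify.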

Published

2021-05-18

How to Cite

Ruoss, A., Baader, M., Balunović, M., & Vechev, M. (2021). Efficient Certification of Spatial Robustness. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2504-2513. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16352

Section

AAAI Technical Track on Computer Vision II