Robust Training of Neural Networks against Bias Field Perturbations

Authors

  • Patrick Henriksen, Imperial College London; Safe Intelligence
  • Alessio Lomuscio, Safe Intelligence

DOI:

https://doi.org/10.1609/aaai.v37i12.26736

Keywords:

General

Abstract

We introduce the problem of training neural networks such that they are robust against a class of smooth intensity perturbations modelled by bias fields. We first develop an approach towards this goal based on a state-of-the-art robust training method utilising Interval Bound Propagation (IBP). We analyse the resulting algorithm and observe that IBP often produces very loose bounds for bias field perturbations, which may be detrimental to training. We then propose an alternative approach based on Symbolic Interval Propagation (SIP), which usually results in significantly tighter bounds than IBP. We present ROBNET, a tool implementing these approaches for bias field robust training. In experiments, networks trained with the SIP-based approach achieved up to 31% higher certified robustness while also maintaining better accuracy than networks trained with the IBP-based approach.
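The following is a minimal sketch, not the authors' ROBNET implementation, illustrating why a symbolic treatment of a bias field can give tighter bounds than plain IBP for a single affine layer. It assumes the perturbed input is x + B c, with a fixed (illustrative) smooth basis B and field coefficients c in a box of radius eps; all variable names and sizes here are assumptions for demonstration only.

```python
# Hedged sketch: IBP vs. a symbolic-interval style bound for one affine layer
# under a bias-field perturbation x' = x + B @ c, with c in [-eps, eps]^k.
# Not the paper's algorithm; B, eps, and all dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(0)

n, k, m = 6, 2, 4            # input dim, bias-field coefficients, output dim
x = rng.normal(size=n)       # nominal input (e.g. a flattened image patch)
B = rng.normal(size=(n, k))  # assumed smooth bias-field basis
eps = 0.1

W = rng.normal(size=(m, n))  # layer weights
b = rng.normal(size=m)       # layer bias

# --- IBP: concretise the perturbed input into a box, then propagate the box.
radius_in = np.abs(B) @ np.full(k, eps)       # per-pixel interval radius
l_in, u_in = x - radius_in, x + radius_in
W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
l_ibp = W_pos @ l_in + W_neg @ u_in + b
u_ibp = W_pos @ u_in + W_neg @ l_in + b

# --- Symbolic propagation: keep the output linear in c, concretise at the end.
# y(c) = W @ (x + B @ c) + b = (W @ x + b) + (W @ B) @ c
centre = W @ x + b
coeffs = W @ B                                # symbolic dependence on c
radius_out = np.abs(coeffs) @ np.full(k, eps)
l_sym, u_sym = centre - radius_out, centre + radius_out

print("IBP widths      :", u_ibp - l_ibp)
print("symbolic widths :", u_sym - l_sym)     # never wider than the IBP widths
```

Because |W @ B| is elementwise no larger than |W| @ |B|, the symbolic bounds are at most as wide as the IBP bounds, and typically much tighter when the bias field couples many pixels through only a few coefficients.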

Published

2023-06-26

How to Cite

Henriksen, P., & Lomuscio, A. (2023). Robust Training of Neural Networks against Bias Field Perturbations. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14865-14873. https://doi.org/10.1609/aaai.v37i12.26736

Section

AAAI Special Track on Safe and Robust AI