Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations

Authors

  • Alex Wong, UCLA Vision Lab
  • Mukund Mundhra, UCLA Vision Lab
  • Stefano Soatto, UCLA Vision Lab

DOI:

https://doi.org/10.1609/aaai.v35i4.16394

Keywords:

Adversarial Attacks & Robustness

Abstract

We study the effect of adversarial perturbations of images on the disparity estimates of deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but also transfer to models with different architectures trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations yield trained models that are more robust without sacrificing overall accuracy. This is unlike what has been observed in image classification, where adding perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but at the expense of overall accuracy. We test our method on the most recent stereo networks and evaluate their performance on public benchmark datasets.
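The core mechanism the abstract describes, an imperceptible additive perturbation that increases disparity error, can be illustrated with a minimal gradient-sign (FGSM-style) sketch. This is not the paper's implementation: the linear map `W` below is a hypothetical stand-in for a deep stereo network, chosen so the gradient is available in closed form with NumPy alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a stereo network: a linear map from a
# flattened image pair to a disparity vector. The paper attacks deep
# stereo networks; this only illustrates the perturbation mechanism.
W = rng.normal(size=(8, 32))
x = rng.normal(size=32)                        # clean "image pair"
d_gt = W @ x + rng.normal(scale=0.1, size=8)   # ground-truth disparity

def loss_and_grad(x_in):
    """L2 disparity loss and its gradient with respect to the input."""
    r = W @ x_in - d_gt
    return 0.5 * float(r @ r), W.T @ r

eps = 0.02                                     # small sup-norm budget
loss_clean, g = loss_and_grad(x)
x_adv = x + eps * np.sign(g)                   # gradient-sign perturbation
loss_adv, _ = loss_and_grad(x_adv)

# The perturbation stays within the budget, yet the disparity error grows.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
assert loss_adv > loss_clean
```

For adversarial data augmentation as described in the abstract, such perturbed inputs `x_adv` would be added to the training set alongside the clean ones, with the network trained on both.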

Published

2021-05-18

How to Cite

Wong, A., Mundhra, M., & Soatto, S. (2021). Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 2879-2888. https://doi.org/10.1609/aaai.v35i4.16394

Section

AAAI Technical Track on Computer Vision III