Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations


  • Alex Wong, UCLA Vision Lab
  • Mukund Mundhra, UCLA Vision Lab
  • Stefano Soatto, UCLA Vision Lab



Adversarial Attacks & Robustness


We study the effect of adversarial perturbations of images on the disparity estimates of deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but transfer to models with different architectures, trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations yield trained models that are more robust, without sacrificing overall accuracy. This is unlike what has been observed in image classification, where adding the perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but at the cost of overall accuracy. We test our method on the most recent stereo networks and evaluate their performance on public benchmark datasets.
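The additive perturbations described above are typically crafted by ascending the gradient of the model's loss with respect to the input images, as in the fast gradient sign method (FGSM). The sketch below is illustrative only, not the authors' method: it uses a toy analytic loss in place of a real stereo network, but the perturbation step (scale the sign of the image gradient by a small epsilon, then clip to the valid pixel range) is the same in form.

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.02):
    """One fast-gradient-sign step: an additive perturbation of
    magnitude eps that is imperceptible when eps is small."""
    perturbed = image + eps * np.sign(grad)
    # Keep pixel intensities in the valid [0, 1] range.
    return np.clip(perturbed, 0.0, 1.0)

# Toy stand-in for a stereo loss: L(x) = 0.5 * ||x - target||^2,
# whose gradient with respect to the image is (x - target).
# A real attack would backpropagate through the stereo network instead.
rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(4, 4))
target = rng.uniform(0.2, 0.8, size=(4, 4))
grad = image - target

adv = fgsm_perturb(image, grad, eps=0.02)
print(np.max(np.abs(adv - image)))  # bounded by eps
```

In a real stereo attack, both the left and right images would be perturbed jointly, with the gradient taken with respect to each input; iterated variants apply many such small steps while projecting back into the epsilon ball.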




How to Cite

Wong, A., Mundhra, M., & Soatto, S. (2021). Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 2879-2888.



AAAI Technical Track on Computer Vision III