Robust Adversarial Objects against Deep Learning Models

Authors

  • Tzungyu Tsai, National Tsing Hua University
  • Kaichen Yang, University of Florida
  • Tsung-Yi Ho, National Tsing Hua University
  • Yier Jin, University of Florida

DOI:

https://doi.org/10.1609/aaai.v34i01.5443

Abstract

Previous work has shown that Deep Neural Networks (DNNs), including those currently deployed in many fields, are extremely vulnerable to maliciously crafted inputs known as adversarial examples. Despite extensive research on adversarial examples in many domains, adversarial 3D data, such as point clouds, remains comparatively unexplored. The study of adversarial 3D data is crucial given its impact on real-life, high-stakes scenarios such as autonomous driving. In this paper, we propose a novel adversarial attack against PointNet++, a deep neural network that performs classification and segmentation tasks using features learned directly from raw 3D points. In comparison to existing works, our attack generates not only adversarial point clouds but also robust adversarial objects that in turn yield adversarial point clouds when sampled, both in simulation and after being constructed in the real world. We also demonstrate that our objects can bypass existing defense mechanisms designed specifically against adversarial 3D data.
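The paper details its own attack construction; purely as an illustrative sketch of the general class of techniques it belongs to, the snippet below applies a generic PGD-style gradient perturbation to point coordinates against a toy point-based classifier. Every name and setting here (TinyPointNet, pgd_point_attack, the eps/alpha/steps values) is hypothetical and is not taken from this work or from the PointNet++ codebase.

# Hypothetical sketch: generic PGD-style perturbation of a point cloud.
# This is NOT the paper's attack; it only illustrates gradient-based
# adversarial point-cloud generation against a point-based classifier.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Stand-in point-cloud classifier (shared MLP + max pool), not PointNet++."""
    def __init__(self, num_classes: int = 40):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, num_points)
        feat = self.mlp(x).max(dim=2).values   # global max pooling over points
        return self.head(feat)

def pgd_point_attack(model, points, label, eps=0.02, alpha=0.005, steps=40):
    """Shift point coordinates within an L-infinity ball of radius eps."""
    adv = points.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # ascend the loss
            adv = points + (adv - points).clamp(-eps, eps)  # project back
    return adv.detach()

if __name__ == "__main__":
    model = TinyPointNet()
    cloud = torch.rand(1, 3, 1024)   # one random point cloud
    label = torch.tensor([0])        # its (assumed) true class
    adv_cloud = pgd_point_attack(model, cloud, label)
    print(model(cloud).argmax(1), model(adv_cloud).argmax(1))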

Published

2020-04-03

How to Cite

Tsai, T., Yang, K., Ho, T.-Y., & Jin, Y. (2020). Robust Adversarial Objects against Deep Learning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 954-962. https://doi.org/10.1609/aaai.v34i01.5443

Issue

Vol. 34 No. 01 (2020)

Section

AAAI Technical Track: Applications