Poisoning-Based Backdoor Attacks in Computer Vision

Authors

  • Yiming Li, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v37i13.26921

Keywords:

Backdoor Attack, Backdoor Learning, Data Poisoning, Trustworthy ML, AI Security

Abstract

Recent studies have demonstrated that the training process of deep neural networks (DNNs) is vulnerable to backdoor attacks when third-party training resources (e.g., samples) are adopted. Specifically, adversaries intend to embed hidden backdoors into DNNs, where the backdoor can be activated by pre-defined trigger patterns, leading to malicious model predictions. My dissertation focuses on poisoning-based backdoor attacks in computer vision. Firstly, I study and propose more stealthy and effective attacks against image classification tasks in both physical and digital spaces. Secondly, I reveal backdoor threats in visual object tracking, a representative of critical video-related tasks. Thirdly, I explore how to exploit backdoor attacks as watermarking techniques for positive purposes. Finally, I design a Python toolbox (i.e., BackdoorBox) that implements representative and advanced backdoor attacks and defenses under a unified and flexible framework, based on which I provide a comprehensive benchmark of existing methods.
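To make the poisoning-based attack setting concrete, the sketch below illustrates the classic BadNets-style recipe that this line of work builds on: stamp a small trigger patch onto a fraction of the training images and relabel them to an attacker-chosen target class. This is a minimal illustration of the general technique, not the dissertation's specific method or the BackdoorBox API; the function and parameter names here are my own assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label,
                   poison_rate=0.1, trigger_value=1.0,
                   patch_size=3, seed=0):
    """BadNets-style poisoning sketch (illustrative, hypothetical API).

    Stamps a solid square trigger in the bottom-right corner of a random
    fraction of the images and flips their labels to `target_label`.
    A model trained on the result tends to associate the trigger with
    the target class while behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Select which training samples to poison.
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp the trigger patch onto the selected images.
    images[idx, -patch_size:, -patch_size:] = trigger_value

    # Relabel the poisoned samples to the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx
```

At inference time, the adversary activates the backdoor simply by stamping the same patch onto a test image; stealthier attacks studied in the dissertation replace this visible patch with less conspicuous trigger designs.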

Published

2023-09-06

How to Cite

Li, Y. (2023). Poisoning-Based Backdoor Attacks in Computer Vision. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16121-16122. https://doi.org/10.1609/aaai.v37i13.26921