On Feasibility of Intent Obfuscating Attacks
DOI:
https://doi.org/10.1609/aies.v7i1.31685

Abstract
Intent obfuscation is a common tactic in adversarial situations, enabling the attacker to both manipulate the target system and avoid culpability. Surprisingly, it has rarely been implemented in adversarial attacks on machine learning systems. We are the first to propose using intent obfuscation to generate adversarial examples for object detectors: by perturbing another, non-overlapping object to disrupt the target object, the attacker hides the intended target. We conduct a randomized experiment on 5 prominent detectors (YOLOv3, SSD, RetinaNet, Faster R-CNN, and Cascade R-CNN) using both targeted and untargeted attacks and achieve success on all models and attacks. We analyze the success factors characterizing intent obfuscating attacks, including target object confidence and perturb object sizes. We then demonstrate that the attacker can exploit these success factors to increase success rates for all models and attacks. Finally, we discuss the main takeaways and legal repercussions. If you are reading the AAAI/ACM version, please download the technical appendix on arXiv at https://arxiv.org/abs/2408.02674
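The key mechanic described in the abstract is that the perturbation is confined to a non-overlapping "perturb object" while the attack loss targets the detector's output on the hidden target object. Below is a minimal sketch of what the untargeted variant could look like, assuming a differentiable PyTorch detector; the detector stub, region coordinates, perturbation budget, and step schedule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an untargeted intent obfuscating attack (PGD-style).
# Assumptions: a differentiable detector, illustrative box coordinates,
# and an L-inf budget -- none of these are the paper's exact settings.
import torch

def target_confidence(image: torch.Tensor) -> torch.Tensor:
    # Stand-in for a detector's confidence score on the attacker's hidden
    # target object; in practice this would wrap e.g. a YOLOv3 or
    # Faster R-CNN forward pass and read out the target box's score.
    return image.mean()

image = torch.rand(3, 416, 416)        # input image, pixel values in [0, 1]
mask = torch.zeros_like(image)         # confine changes to the perturb object
mask[:, 50:150, 50:150] = 1.0          # perturb-object box, chosen so it does
                                       # NOT overlap the target object's box
delta = torch.zeros_like(image, requires_grad=True)
eps, step, iters = 16 / 255, 2 / 255, 40   # L-inf budget and PGD schedule

for _ in range(iters):
    adv = torch.clamp(image + delta * mask, 0.0, 1.0)
    loss = target_confidence(adv)      # confidence of the (untouched) target
    loss.backward()                    # gradient flows only through the mask
    with torch.no_grad():
        delta -= step * delta.grad.sign()  # push target confidence down
        delta.clamp_(-eps, eps)            # stay inside the budget
    delta.grad.zero_()

adv_image = torch.clamp(image + delta * mask, 0.0, 1.0).detach()
```

A targeted variant would instead maximize the detector's score for an attacker-chosen label at the target box; the masked update is what hides the attacker's intent, since the pixels of the target object itself are never modified.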
Published
2024-10-16
How to Cite
Li, Z., & Shafto, P. (2024). On Feasibility of Intent Obfuscating Attacks. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 855-866. https://doi.org/10.1609/aies.v7i1.31685
Issue
Vol. 7 No. 1 (2024)
Section
Full Archival Papers