Breaking the Stealth-Potency Trade-off in Clean-Image Backdoors with Generative Trigger Optimization

Authors

  • Binyan Xu, The Chinese University of Hong Kong
  • Fan Yang, The Chinese University of Hong Kong
  • Di Tang, Sun Yat-sen University
  • Xilin Dai, Zhejiang University
  • Kehuan Zhang, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v40i32.39935

Abstract

Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to security-critical applications. A critical flaw in existing methods is that the poison rate required for a successful attack induces a proportional, and thus noticeable, drop in Clean Accuracy (CA), undermining their stealthiness. This paper presents a new paradigm for clean-image attacks that minimizes this accuracy degradation by optimizing the trigger itself. We introduce Generative Clean-Image Backdoors (GCB), a framework that uses a conditional InfoGAN to identify naturally occurring image features that can serve as potent and stealthy triggers. By ensuring these triggers are easily separable from benign task-related features, GCB enables a victim model to learn the backdoor from an extremely small set of poisoned examples, resulting in a CA drop of less than 1%. Our experiments demonstrate GCB's remarkable versatility, successfully adapting to six datasets, five architectures, and four tasks, including the first demonstration of clean-image backdoors in regression and segmentation. GCB also exhibits resilience against most existing backdoor defenses.
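The core property of a clean-image attack described above — images are never modified, only labels — can be illustrated with a minimal sketch. This is not the paper's GCB pipeline: `has_trigger_feature` is a hypothetical stand-in for the InfoGAN-derived feature detector, and the poisoning budget models the small poison rate the abstract mentions.

```python
def poison_labels(dataset, has_trigger_feature, target_label, budget):
    """Relabel up to `budget` samples whose images contain the trigger
    feature; the images themselves are never touched (clean-image attack)."""
    poisoned, flips = [], 0
    for image, label in dataset:
        if flips < budget and label != target_label and has_trigger_feature(image):
            poisoned.append((image, target_label))  # flip label only
            flips += 1
        else:
            poisoned.append((image, label))  # sample kept unchanged
    return poisoned, flips

# Toy demo: "images" are dicts, and the trigger feature is a marker key.
data = [({"feat": i % 2}, i % 3) for i in range(12)]
poisoned, n_flipped = poison_labels(
    data, lambda img: img["feat"] == 1, target_label=0, budget=3
)
```

A victim training on `poisoned` would see only natural images, so image-space trigger detection cannot apply; the attack's footprint is confined to the label column.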

Published

2026-03-14

How to Cite

Xu, B., Yang, F., Tang, D., Dai, X., & Zhang, K. (2026). Breaking the Stealth-Potency Trade-off in Clean-Image Backdoors with Generative Trigger Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27197-27205. https://doi.org/10.1609/aaai.v40i32.39935

Section

AAAI Technical Track on Machine Learning IX