Generating Risky Samples with Conformity Constraints via Diffusion Models

Authors

  • Han Yu, Tsinghua University
  • Hao Zou, Tsinghua University
  • Xingxuan Zhang, Tsinghua University
  • Zhengyi Wang, Tsinghua University
  • Yue He, Renmin University of China
  • Kehan Li, Tsinghua University
  • Peng Cui, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v40i33.40017

Abstract

Although neural networks achieve promising performance on many tasks, they may still fail on certain examples, posing risks to downstream applications. To discover such risky samples, prior work searches for patterns of risky samples within existing datasets or injects perturbations into them; however, the diversity of risky samples found this way is limited by the coverage of those datasets. To overcome this limitation, recent works adopt diffusion models to produce new risky samples beyond the coverage of existing datasets. However, these methods struggle to ensure conformity between generated samples and the expected categories, which can introduce label noise and severely limit their effectiveness in applications. To address this issue, we propose RiskyDiff, which incorporates the embeddings of both texts and images as implicit constraints on category conformity. We also design a conformity score to further strengthen category conformity explicitly, and introduce the mechanisms of embedding screening and risky gradient guidance to boost the risk of generated samples. Extensive experiments reveal that RiskyDiff greatly outperforms existing methods in terms of degree of risk, generation quality, and conformity with the conditioned categories. We also show empirically that the generalization ability of models can be enhanced by augmenting training data with generated samples of high conformity.
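To illustrate the flavor of the "risky gradient guidance" idea named in the abstract, the toy sketch below nudges a sample along the gradient that increases a target classifier's loss (making it riskier) while a conformity term pulls it toward a category embedding. All names (`risky_guidance_step`, the linear classifier `W`, `target_embed`, and the weights `lam_risk`, `lam_conf`) are hypothetical stand-ins, not the paper's actual diffusion-based method.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's implementation: one guidance update
# that trades off "risk" (higher loss of a classifier under test) against
# "conformity" (closeness to the target category's embedding).

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))                     # toy linear classifier under test
target_embed = np.array([1.0, 0.0, 0.0, 0.0])   # stand-in category embedding

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def risky_guidance_step(x, label, lam_risk=0.5, lam_conf=0.5):
    """Raise the classifier's cross-entropy loss while staying near the
    target category embedding (a crude analogue of conformity constraints)."""
    p = softmax(W @ x)
    # Ascend the cross-entropy loss w.r.t. x => a riskier sample.
    grad_risk = W.T @ (p - np.eye(2)[label])
    # Descend the squared distance to the category embedding => conformity.
    grad_conf = 2.0 * (x - target_embed)
    return x + lam_risk * grad_risk - lam_conf * grad_conf

x = np.array([0.5, -0.2, 0.3, 0.1])
x_risky = risky_guidance_step(x, label=0)
```

With `lam_conf=0` this reduces to pure gradient ascent on a convex loss, so each step cannot decrease the classifier's loss; the conformity term then counteracts drift away from the conditioned category.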

Published

2026-03-14

How to Cite

Yu, H., Zou, H., Zhang, X., Wang, Z., He, Y., Li, K., & Cui, P. (2026). Generating Risky Samples with Conformity Constraints via Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 27934–27942. https://doi.org/10.1609/aaai.v40i33.40017

Section

AAAI Technical Track on Machine Learning X