T2I-RiskyPrompt: A Benchmark for Safety Evaluation, Attack, and Defense on Text-to-Image Model

Authors

  • Chenyu Zhang School of New Media and Communication, Tianjin University, Tianjin, China
  • Tairen Zhang Medical School of Tianjin University, Tianjin, China
  • Lanjun Wang School of New Media and Communication, Tianjin University, Tianjin, China
  • Ruidong Chen School of Electrical and Information Engineering, Tianjin University, Tianjin, China
  • Wenhui Li School of Electrical and Information Engineering, Tianjin University, Tianjin, China
  • Anan Liu School of Electrical and Information Engineering, Tianjin University, Tianjin, China

DOI:

https://doi.org/10.1609/aaai.v40i42.40920

Abstract

Using risky text prompts, such as pornographic and violent prompts, to test the safety of text-to-image (T2I) models is a critical task. However, existing risky prompt datasets are limited in three key areas: 1) narrow coverage of risk categories, 2) coarse-grained annotation, and 3) low effectiveness. To address these limitations, we introduce T2I-RiskyPrompt, a comprehensive benchmark designed for evaluating safety-related tasks on T2I models. Specifically, we first develop a hierarchical risk taxonomy, which consists of 6 primary categories and 14 fine-grained subcategories. Building upon this taxonomy, we construct a pipeline to collect and annotate risky prompts. Finally, we obtain 6,432 effective risky prompts, each annotated with both hierarchical category labels and detailed risk reasons. Moreover, to facilitate evaluation, we propose a reason-driven risky image detection method that explicitly aligns the MLLM with safety annotations. Based on T2I-RiskyPrompt, we conduct a comprehensive evaluation of eight T2I models, nine defense methods, five safety filters, and five attack strategies, offering nine key insights into the strengths and limitations of T2I model safety. Finally, we discuss potential applications of T2I-RiskyPrompt across various research fields.

Published

2026-03-14

How to Cite

Zhang, C., Zhang, T., Wang, L., Chen, R., Li, W., & Liu, A. (2026). T2I-RiskyPrompt: A Benchmark for Safety Evaluation, Attack, and Defense on Text-to-Image Model. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 36039–36047. https://doi.org/10.1609/aaai.v40i42.40920

Section

AAAI Technical Track on Philosophy and Ethics of AI