Value-Aligned Prompt Moderation via Zero-Shot Agentic Rewriting for Safe Image Generation

Authors

  • Xin Zhao — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences
  • Xiaojun Chen — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences
  • Bingshan Liu — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences
  • Zeyao Liu — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences
  • Zhendong Zhao — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences
  • Xiaoyan Gu — Institute of Information Engineering, Chinese Academy of Sciences; State Key Laboratory of Cyberspace Security Defense; School of Cyber Security, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i44.41152

Abstract

Generative vision-language models like Stable Diffusion demonstrate remarkable capabilities in creative media synthesis, but they also pose substantial risks of producing unsafe, offensive, or culturally inappropriate content when prompted adversarially. Current defenses struggle to align outputs with human values without sacrificing generation quality or incurring high costs. To address these challenges, we introduce VALOR (Value-Aligned LLM-Overseen Rewriter), a modular, zero-shot agentic framework for safer and more helpful text-to-image generation. VALOR integrates layered prompt analysis with human-aligned value reasoning: a multi-level NSFW detector filters lexical and semantic risks; a cultural value alignment module identifies violations of social norms, legality, and representational ethics; and an intention disambiguator detects subtle or indirect unsafe implications. When unsafe content is detected, prompts are selectively rewritten by a large language model under dynamic, role-specific instructions designed to preserve user intent while enforcing alignment. If the generated image still fails a safety check, VALOR optionally performs a stylistic regeneration to steer the output toward a safer visual domain without altering core semantics. Experiments across adversarial, ambiguous, and value-sensitive prompts show that VALOR significantly reduces unsafe outputs by up to 100.00% while preserving prompt usefulness and creativity. These results highlight VALOR as a scalable and effective approach for deploying safe, aligned, and helpful image generation systems in open-world settings.
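The abstract describes a layered moderation loop: analyze the prompt, rewrite it with an LLM if it is flagged, then fall back to stylistic regeneration if the output still fails a safety check. The following is a minimal illustrative sketch of that control flow only; every module body (the lexical detector, the rewriter, the regeneration step, and the `UNSAFE_TERMS` list) is a hypothetical placeholder standing in for the paper's actual components, not the authors' implementation.

```python
# Sketch of a VALOR-style moderation loop as described in the abstract.
# All internals are illustrative placeholders, NOT the authors' code.

UNSAFE_TERMS = {"gore", "nude"}  # stand-in for the multi-level NSFW detector's lexicon


def detect_unsafe(prompt: str) -> bool:
    """Lexical stand-in for the NSFW, value-alignment, and intention checks."""
    return any(term in prompt.lower() for term in UNSAFE_TERMS)


def rewrite_prompt(prompt: str) -> str:
    """Stand-in for the LLM rewriter: drop flagged terms, keep the rest of the intent."""
    return " ".join(w for w in prompt.split() if w.lower() not in UNSAFE_TERMS)


def stylistic_regeneration(prompt: str) -> str:
    """Stand-in for steering the image toward a safer visual domain."""
    return prompt + ", rendered as a stylized watercolor illustration"


def moderate(prompt: str) -> str:
    """Detect -> selectively rewrite -> post-check -> optional stylistic fallback."""
    if detect_unsafe(prompt):
        prompt = rewrite_prompt(prompt)
    if detect_unsafe(prompt):  # image-level safety check, approximated on text here
        prompt = stylistic_regeneration(prompt)
    return prompt
```

Safe prompts pass through unchanged, mirroring the paper's goal of preserving usefulness; only flagged prompts are rewritten, and regeneration is a last resort.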

Published

2026-03-14

How to Cite

Zhao, X., Chen, X., Liu, B., Liu, Z., Zhao, Z., & Gu, X. (2026). Value-Aligned Prompt Moderation via Zero-Shot Agentic Rewriting for Safe Image Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 38137–38145. https://doi.org/10.1609/aaai.v40i44.41152

Section

AAAI Special Track on AI Alignment