ORES: Open-Vocabulary Responsible Visual Synthesis

Authors

  • Minheng Ni, Microsoft Research Asia
  • Chenfei Wu, Microsoft Research Asia
  • Xiaodong Wang, Microsoft Research Asia
  • Shengming Yin, Microsoft Research Asia
  • Lijuan Wang, Microsoft Azure AI
  • Zicheng Liu, Microsoft Azure AI
  • Nan Duan, Microsoft Research Asia

DOI:

https://doi.org/10.1609/aaai.v38i19.30144

Keywords:

General

Abstract

Avoiding the synthesis of specific visual concepts is an essential challenge in responsible visual synthesis. However, the visual concepts that need to be avoided tend to be diverse, depending on the region, context, and usage scenario. In this work, we formalize a new task, Open-vocabulary Responsible Visual Synthesis (ORES), in which the synthesis model must avoid forbidden visual concepts while allowing users to input any desired content. To address this problem, we present a Two-stage Intervention (TIN) framework. By introducing 1) rewriting with a learnable instruction through a large-scale language model (LLM) and 2) synthesizing with prompt intervention on a diffusion synthesis model, it can effectively synthesize images that avoid arbitrary forbidden concepts while following the user's query as closely as possible. To evaluate ORES, we provide a publicly available dataset, baseline models, and a benchmark. Experimental results demonstrate the effectiveness of our method in reducing the risks of image generation. Our work highlights the potential of LLMs in responsible visual synthesis. Our code and dataset are publicly available at https://github.com/kodenii/ORES.
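To make the two stages described in the abstract concrete, below is a minimal sketch of a TIN-style pipeline. It is based only on the abstract, not on the released code: the instruction text is a hand-written placeholder (the paper learns this instruction), and the model names, helper names, switch point, and the direction of the prompt switch are all illustrative assumptions. It also assumes a recent `diffusers` version that supports `callback_on_step_end` and a CUDA-capable machine.

```python
# Sketch of a two-stage (rewrite, then intervene) pipeline, assuming the
# OpenAI Python SDK (v1) and Hugging Face diffusers. Not the paper's
# released implementation.
import torch
from diffusers import StableDiffusionPipeline
from openai import OpenAI


def rewrite_query(query: str, forbidden_concept: str) -> str:
    """Stage 1: ask an LLM to rewrite the query without the forbidden concept.

    The instruction below is a fixed placeholder; the paper instead learns
    the instruction.
    """
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    instruction = (
        f"Rewrite the image description so it contains no trace of "
        f"'{forbidden_concept}', while staying as close as possible to the "
        f"original intent. Reply with the rewritten description only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content.strip()


def synthesize(query: str, safe_query: str, switch_step: int = 10):
    """Stage 2: denoise under the original query early on (to keep the
    user-intended semantics), then switch to the rewritten query.

    The switch direction and timing are assumptions about what "prompt
    intervention" means; the paper defines the exact mechanism.
    """
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Pre-encode the rewritten prompt. With classifier-free guidance the
    # pipeline stacks [unconditional, conditional] embeddings, so we match
    # that layout for the swap.
    cond, uncond = pipe.encode_prompt(
        safe_query, pipe.device, 1, do_classifier_free_guidance=True
    )
    safe_embeds = torch.cat([uncond, cond])

    def intervene(pipe, step, timestep, callback_kwargs):
        # Swap in the safe prompt's embeddings once `switch_step` is reached.
        if step == switch_step:
            callback_kwargs["prompt_embeds"] = safe_embeds
        return callback_kwargs

    return pipe(
        query,
        num_inference_steps=50,
        callback_on_step_end=intervene,
        callback_on_step_end_tensor_inputs=["prompt_embeds"],
    ).images[0]


if __name__ == "__main__":
    user_query = "a crowded subway platform at rush hour"
    safe_query = rewrite_query(user_query, forbidden_concept="crowds")
    synthesize(user_query, safe_query).save("output.png")
```

The intuition behind intervening mid-denoising is that early steps fix the global layout while later steps settle content details, so switching prompts partway through lets the image stay close to the user's query while steering the final content away from the forbidden concept.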

Published

2024-03-24

How to Cite

Ni, M., Wu, C., Wang, X., Yin, S., Wang, L., Liu, Z., & Duan, N. (2024). ORES: Open-Vocabulary Responsible Visual Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21473-21481. https://doi.org/10.1609/aaai.v38i19.30144

Issue

Vol. 38 No. 19 (2024)

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track