Target-Free Text-Guided Image Manipulation

Authors

  • Wan-Cyuan Fan, National Taiwan University
  • Cheng-Fu Yang, University of California, Los Angeles
  • Chiao-An Yang, Purdue University
  • Yu-Chiang Frank Wang, National Taiwan University, NVIDIA

DOI:

https://doi.org/10.1609/aaai.v37i1.25134

Keywords:

CV: Language and Vision, CV: Applications, ML: Deep Generative Models & Autoencoders

Abstract

We tackle the problem of target-free text-guided image manipulation, which requires one to modify an input reference image based on a given text instruction, while no ground-truth target image is observed during training. To address this challenging task, we propose a Cyclic-Manipulation GAN (cManiGAN) in this paper, which learns where and how to edit the image regions of interest. Specifically, the image editor in cManiGAN learns to identify and complete the input image, while a cross-modal interpreter and a reasoner are deployed to verify the semantic correctness of the output image with respect to the input instruction. The former utilizes factual/counterfactual description learning to authenticate the image semantics, while the latter predicts the "undo" instruction and provides pixel-level supervision for the training of cManiGAN. With the above operational cycle-consistency, our cManiGAN can be trained in this weakly supervised setting. We conduct extensive experiments on the CLEVR and COCO datasets, which verify the effectiveness and generalizability of our proposed method. Project page: sites.google.com/view/wancyuanfan/projects/cmanigan.
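To make the operational cycle-consistency described above concrete, below is a minimal PyTorch sketch of one weakly supervised training step: the editor modifies the image given the instruction, the interpreter scores semantic correctness, the reasoner predicts the "undo" instruction, and re-applying the editor with that instruction yields a pixel-level reconstruction loss against the original image. All module names (TextImageEditor, Interpreter, Reasoner) and their internals are hypothetical stand-ins for illustration only, not the authors' released code; counterfactual description learning is omitted for brevity.

import torch
import torch.nn as nn

class TextImageEditor(nn.Module):
    """Stand-in editor: fuses an instruction embedding into the image."""
    def __init__(self, txt_dim=16, ch=3):
        super().__init__()
        self.fuse = nn.Conv2d(ch + txt_dim, ch, kernel_size=3, padding=1)

    def forward(self, img, txt):
        # Broadcast the instruction embedding over spatial locations and fuse.
        b, _, h, w = img.shape
        txt_map = txt[:, :, None, None].expand(b, txt.size(1), h, w)
        return torch.tanh(self.fuse(torch.cat([img, txt_map], dim=1)))

class Interpreter(nn.Module):
    """Stand-in cross-modal interpreter: scores image/instruction agreement."""
    def __init__(self, txt_dim=16, ch=3):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, txt_dim))

    def forward(self, img, txt):
        return torch.sigmoid((self.img_enc(img) * txt).sum(dim=1))

class Reasoner(nn.Module):
    """Stand-in reasoner: predicts the 'undo' instruction embedding."""
    def __init__(self, txt_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(txt_dim, txt_dim), nn.ReLU(), nn.Linear(txt_dim, txt_dim))

    def forward(self, txt):
        return self.mlp(txt)

def cycle_step(editor, interpreter, reasoner, img, txt):
    """One weakly supervised step: edit, verify semantics, undo, reconstruct."""
    edited = editor(img, txt)                       # edit without a target image
    sem_score = interpreter(edited, txt)            # semantic correctness of the edit
    undo_txt = reasoner(txt)                        # predicted "undo" instruction
    recon = editor(edited, undo_txt)                # apply the undo instruction
    loss_sem = -torch.log(sem_score + 1e-6).mean()  # instruction/image consistency
    loss_cyc = nn.functional.l1_loss(recon, img)    # pixel-level cycle supervision
    return loss_sem + loss_cyc

if __name__ == "__main__":
    editor, interpreter, reasoner = TextImageEditor(), Interpreter(), Reasoner()
    img = torch.rand(2, 3, 64, 64)   # reference images
    txt = torch.randn(2, 16)         # instruction embeddings (e.g., from a text encoder)
    loss = cycle_step(editor, interpreter, reasoner, img, txt)
    loss.backward()
    print(float(loss))

The key design point this sketch illustrates is that no target image appears anywhere in the loss: supervision comes only from the instruction (via the interpreter) and from reconstructing the original image after the predicted "undo" edit.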

Published

2023-06-26

How to Cite

Fan, W.-C., Yang, C.-F., Yang, C.-A., & Wang, Y.-C. F. (2023). Target-Free Text-Guided Image Manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 588-596. https://doi.org/10.1609/aaai.v37i1.25134

Issue

Vol. 37 No. 1 (2023)

Section

AAAI Technical Track on Computer Vision I