TiGAN: Text-Based Interactive Image Generation and Manipulation


  • Yufan Zhou University at Buffalo
  • Ruiyi Zhang Adobe Research
  • Jiuxiang Gu Adobe Research
  • Chris Tensmeyer Adobe Research
  • Tong Yu Adobe Research
  • Changyou Chen University at Buffalo
  • Jinhui Xu University at Buffalo
  • Tong Sun Adobe Research




Computer Vision (CV), Speech & Natural Language Processing (SNLP), Humans And AI (HAI)


Using natural-language feedback to guide image generation and manipulation can greatly lower the effort and skill required. This topic has received increased attention in recent years through refinements of Generative Adversarial Networks (GANs); however, most existing works are limited to single-round interaction, which is not reflective of real-world interactive image editing workflows. Furthermore, previous works dealing with multi-round scenarios are limited to predefined feedback sequences, which is also impractical. In this paper, we propose a novel framework for Text-based Interactive image generation and manipulation (TiGAN) that responds to users' natural-language feedback. TiGAN utilizes the powerful pre-trained CLIP model to understand users' natural-language feedback and exploits contrastive learning for a better text-to-image mapping. To maintain image consistency during interactions, TiGAN generates intermediate feature vectors aligned with the feedback and selectively feeds these vectors into our proposed generative model. Empirical results on several datasets show that TiGAN improves both interaction efficiency and image quality while better avoiding undesirable image manipulations during interactions.
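The abstract mentions exploiting contrastive learning for a better text-to-image mapping. The paper itself details the exact objective; as a rough illustration only, a CLIP-style symmetric contrastive (InfoNCE) loss over matched image/text feature pairs can be sketched as follows. The function name, feature shapes, and temperature value here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Illustrative CLIP-style symmetric contrastive loss (not TiGAN's exact objective).

    img_feats, txt_feats: (N, D) arrays where row i of each is a matched pair.
    """
    # L2-normalize so that dot products are cosine similarities
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) pairwise similarity matrix

    # Matched (image, text) pairs lie on the diagonal
    labels = np.arange(len(img))

    def xent(l):
        # Cross-entropy of each row's softmax against the diagonal target
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Symmetric: image-to-text (rows) plus text-to-image (columns)
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each image embedding toward its paired text embedding while pushing it away from the other texts in the batch, which is the mechanism CLIP-style models use to align the two modalities.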




How to Cite

Zhou, Y., Zhang, R., Gu, J., Tensmeyer, C., Yu, T., Chen, C., Xu, J., & Sun, T. (2022). TiGAN: Text-Based Interactive Image Generation and Manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3580-3588. https://doi.org/10.1609/aaai.v36i3.20270



AAAI Technical Track on Computer Vision III