MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing

Authors

  • Hao Su, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University
  • Jianwei Niu, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University; Industrial Technology Research Institute, School of Information Engineering, Zhengzhou University; Hangzhou Innovation Institute, Beihang University
  • Xuefeng Liu, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University
  • Qingfeng Li, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University
  • Jiahe Cui, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University
  • Ji Wan, State Key Lab of VR Technology and System, School of Computer Science and Engineering, Beihang University

DOI:

https://doi.org/10.1609/aaai.v35i3.16364

Keywords:

Computational Photography, Image & Video Synthesis

Abstract

Manga is a comic form, popular worldwide, that originated in Japan; it typically employs black-and-white stroke lines and geometric exaggeration to depict human appearances, poses, and actions. In this paper, we propose MangaGAN, the first method based on Generative Adversarial Networks (GANs) for unpaired photo-to-manga translation. Inspired by the drawing process of experienced manga artists, MangaGAN generates geometric features and converts each facial region into the manga domain with a tailored multi-GAN architecture. To train MangaGAN, we collect a new dataset from a popular manga work with extensive features. To produce high-quality manga faces, we propose a structural smoothing loss that smooths stroke lines and avoids noisy pixels, and a similarity preserving module that improves the similarity between the photo and manga domains. Extensive experiments show that MangaGAN can produce high-quality manga faces that preserve both facial similarity and manga style, and that it outperforms other reference methods.
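The abstract does not specify the form of the structural smoothing loss. As a rough illustration only, a loss that "smooths stroke lines and avoids noisy pixels" could resemble a total-variation-style penalty on neighboring pixel differences; the NumPy sketch below shows that generic idea under that assumption and is not the authors' actual formulation:

```python
import numpy as np

def smoothing_penalty(img: np.ndarray) -> float:
    """Illustrative total-variation-style penalty (NOT MangaGAN's loss):
    sums absolute differences between neighboring pixels, so images with
    scattered noisy pixels score higher than clean stroke-line images."""
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor differences
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor differences
    return float(dv + dh)

# A crisp black-and-white stroke scores lower than the same stroke plus noise.
clean = np.zeros((8, 8))
clean[:, 4] = 1.0  # one clean vertical stroke line
noisy = clean + 0.2 * np.random.RandomState(0).rand(8, 8)
```

Minimizing such a term during training would push the generator toward clean stroke lines; the paper's own loss may be defined quite differently.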

Published

2021-05-18

How to Cite

Su, H., Niu, J., Liu, X., Li, Q., Cui, J., & Wan, J. (2021). MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2611-2619. https://doi.org/10.1609/aaai.v35i3.16364

Section

AAAI Technical Track on Computer Vision II