Distilling Portable Generative Adversarial Networks for Image Translation


  • Hanting Chen Peking University
  • Yunhe Wang Huawei Noah’s Ark Lab
  • Han Shu Huawei Noah’s Ark Lab
  • Changyuan Wen Huawei Consumer Business Group
  • Chunjing Xu Huawei Noah’s Ark Lab
  • Boxin Shi Peking University
  • Chao Xu Peking University
  • Chang Xu The University of Sydney




Although Generative Adversarial Networks (GANs) have been widely used in various image-to-image translation tasks, they can hardly be applied on mobile devices due to their heavy computation and storage costs. Traditional network compression methods focus on visual recognition tasks and rarely address generation tasks. Inspired by knowledge distillation, a student generator with fewer parameters is trained by inheriting the low-level and high-level information from the original heavy teacher generator. To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and the images generated by the student and teacher generators. An adversarial learning process is therefore established to optimize the student generator and the student discriminator. Qualitative and quantitative analyses on benchmark datasets demonstrate that the proposed method can learn portable generative models with strong performance.
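The distillation objective summarized in the abstract — a student generator inheriting low-level (pixel) and high-level (feature) information from the teacher, plus an adversarial term from a student discriminator — can be sketched as follows. This is a minimal illustration only: the loss weights, the choice of L1/L2 distances, and the feature embeddings are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def student_generator_loss(student_img, teacher_img,
                           student_feat, teacher_feat,
                           d_out_on_student,
                           lam_pix=10.0, lam_feat=1.0):
    """Sketch of a distilled student generator's objective.

    - low-level term: pixel-wise L1 between student and teacher outputs
    - high-level term: L2 between (hypothetical) feature embeddings
    - adversarial term: non-saturating GAN loss on the student
      discriminator's score for the student's output
    The weights lam_pix / lam_feat are illustrative, not from the paper.
    """
    pixel_loss = np.abs(student_img - teacher_img).mean()    # low-level info
    feat_loss = ((student_feat - teacher_feat) ** 2).mean()  # high-level info
    adv_loss = -np.log(d_out_on_student + 1e-8).mean()       # fool student D
    return adv_loss + lam_pix * pixel_loss + lam_feat * feat_loss
```

If the student exactly reproduces the teacher's output and fully fools the student discriminator, the pixel and feature terms vanish and only the near-zero adversarial term remains; in training, this loss would be minimized over the student generator's parameters while the student discriminator is updated adversarially.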




How to Cite

Chen, H., Wang, Y., Shu, H., Wen, C., Xu, C., Shi, B., Xu, C., & Xu, C. (2020). Distilling Portable Generative Adversarial Networks for Image Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3585-3592. https://doi.org/10.1609/aaai.v34i04.5765



AAAI Technical Track: Machine Learning