Compressing Image-to-Image Translation GANs Using Local Density Structures on Their Learned Manifold

Authors

  • Alireza Ganjdanesh, University of Maryland, College Park
  • Shangqian Gao, University of Pittsburgh
  • Hirad Alipanah, University of Pittsburgh
  • Heng Huang, University of Maryland, College Park

DOI

https://doi.org/10.1609/aaai.v38i11.29100

Keywords

ML: Learning on the Edge & Model Compression, ML: Deep Generative Models & Autoencoders, ML: Deep Neural Architectures and Foundation Models

Abstract

Generative Adversarial Networks (GANs) have shown remarkable success in modeling complex data distributions for image-to-image translation. Still, their high computational demands prohibit their deployment in practical scenarios such as edge devices. Existing GAN compression methods rely mainly on knowledge distillation or on pruning techniques designed for convolutional classifiers, and thus neglect a critical characteristic of GANs: the local density structure over their learned manifold. Accordingly, we approach GAN compression from a new perspective by explicitly encouraging the pruned model to preserve the density structure of the original, parameter-heavy model on its learned manifold. To this end, we partition the learned manifold of the original generator into local neighborhoods around its generated samples. Then, we propose a novel pruning objective that regularizes the pruned model to preserve the local density structure over each neighborhood, in a manner resembling kernel density estimation. Also, we develop a collaborative pruning scheme in which the discriminator and generator are pruned by two pruning agents. We design the agents to capture interactions between the generator and discriminator by exchanging their peer's feedback when determining the corresponding models' architectures. Thanks to this design, our pruning method efficiently finds performant sub-networks and maintains the balance between the generator and discriminator more effectively than baselines during pruning, thereby showing more stable pruning dynamics. Our experiments on the image-to-image translation GAN models Pix2Pix and CycleGAN, with various benchmark datasets and architectures, demonstrate our method's effectiveness.
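The KDE-style regularizer described above can be sketched as follows. This is a minimal illustration, not the paper's actual objective: it assumes a Gaussian kernel over flattened generator outputs, where each original sample's neighborhood is the batch of samples generated around it, and the pruned generator is penalized for deviating from the original generator's kernel-density profile over that neighborhood. The function names (`rbf_kernel`, `local_density_loss`) and the fixed bandwidth are hypothetical choices for this sketch.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between two sets of flattened samples.
    # x: (n, d), y: (m, d) -> (n, m)
    x2 = (x ** 2).sum(axis=1)[:, None]
    y2 = (y ** 2).sum(axis=1)[None, :]
    d2 = np.maximum(x2 + y2 - 2.0 * x @ y.T, 0.0)  # squared distances
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def local_density_loss(orig_imgs, pruned_imgs, bandwidth=1.0):
    """KDE-style sketch: encourage the pruned generator's outputs to
    reproduce the original generator's density estimate over each local
    neighborhood (here, the neighborhood is the batch of original samples)."""
    orig = orig_imgs.reshape(len(orig_imgs), -1)
    pruned = pruned_imgs.reshape(len(pruned_imgs), -1)
    # KDE value of each original sample, estimated from its neighborhood.
    k_orig = rbf_kernel(orig, orig, bandwidth).mean(axis=1)
    # KDE value of each pruned sample, measured against the SAME neighborhood.
    k_pruned = rbf_kernel(pruned, orig, bandwidth).mean(axis=1)
    # Penalize the pruned model for distorting the local density structure.
    return float(((k_pruned - k_orig) ** 2).mean())
```

In a training loop, `orig_imgs` and `pruned_imgs` would be the outputs of the original and pruned generators for the same batch of inputs, and this loss would be added to the usual adversarial and reconstruction terms.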

Published

2024-03-24

How to Cite

Ganjdanesh, A., Gao, S., Alipanah, H., & Huang, H. (2024). Compressing Image-to-Image Translation GANs Using Local Density Structures on Their Learned Manifold. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12118-12126. https://doi.org/10.1609/aaai.v38i11.29100

Section

AAAI Technical Track on Machine Learning II