Improving Generative Moment Matching Networks with Distribution Partition

Authors

  • Yong Ren, Tsinghua University
  • Yucen Luo, Tsinghua University
  • Jun Zhu, Tsinghua University

Keywords

Neural Generative Models & Autoencoders

Abstract

Generative moment matching networks (GMMN) offer a theoretically sound approach to learning deep generative models. However, such methods typically suffer from high sample complexity, which makes them impractical for generating complex data. In this paper, we present a new strategy to train GMMN with low sample complexity while retaining the theoretical soundness. Our method introduces auxiliary variables whose values are provided by a pre-trained model, such as an encoder network, in practice. Conditioned on these variables, we partition the distribution into a set of conditional distributions, each of which can be matched effectively with low sample complexity. We instantiate this strategy with an amortized network, called GMMN-DP, that shares auxiliary variable information for the data generation task, and we develop an efficient stochastic training algorithm. The experimental results show that GMMN-DP can generate complex samples on datasets such as CelebA and CIFAR-10, where the vanilla GMMN fails.
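
To make the matching objective concrete, below is a minimal PyTorch sketch of the idea the abstract describes. It is an illustration under stated assumptions, not the authors' implementation: the generator and encoder callables, their signatures, the noise dimension, and the use of a biased Gaussian-kernel MMD estimator on paired batches are all hypothetical stand-ins for the conditional matching objective of GMMN-DP.

    import torch

    def gaussian_kernel(a, b, sigma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of a and b.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    def mmd2(x, y, sigma=1.0):
        # Biased (V-statistic) estimator of squared MMD between two
        # sample sets; this is the standard GMMN training criterion.
        kxx = gaussian_kernel(x, x, sigma).mean()
        kyy = gaussian_kernel(y, y, sigma).mean()
        kxy = gaussian_kernel(x, y, sigma).mean()
        return kxx + kyy - 2 * kxy

    def gmmn_dp_step(generator, encoder, x, noise_dim=64, sigma=1.0):
        # Auxiliary variables come from a fixed, pre-trained encoder
        # (assumption: the encoder defines the distribution partition).
        with torch.no_grad():
            h = encoder(x)
        z = torch.randn(x.size(0), noise_dim)
        # Amortized generator: the auxiliary variable h is a shared
        # conditioning input, so one network serves all partitions.
        x_gen = generator(z, h)
        # Match the conditional distributions p(x | h) via MMD on
        # batches paired through the same auxiliary variables.
        return mmd2(x_gen.flatten(1), x.flatten(1), sigma)

In this reading, the pre-trained encoder supplies the auxiliary variable h that partitions the data distribution, and the generator only needs to match the simpler conditional distributions p(x | h) rather than the full data distribution, which is where the claimed reduction in sample complexity comes from.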

Published

2021-05-18

How to Cite

Ren, Y., Luo, Y., & Zhu, J. (2021). Improving Generative Moment Matching Networks with Distribution Partition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9403-9410. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17133

Section

AAAI Technical Track on Machine Learning IV