Variational Autoencoder with Implicit Optimal Priors

Authors

  • Hiroshi Takahashi, NTT
  • Tomoharu Iwata, NTT
  • Yuki Yamanaka, NTT
  • Masanori Yamada, NTT
  • Satoshi Yagi, NTT

DOI

https://doi.org/10.1609/aaai.v33i01.33015066

Abstract

The variational autoencoder (VAE) is a powerful generative model that can estimate the probability of a data point by using latent variables. In the VAE, the posterior of the latent variable given the data point is regularized by the prior of the latent variable using Kullback-Leibler (KL) divergence. Although the standard Gaussian distribution is usually used for the prior, this simple prior incurs over-regularization. As a sophisticated prior, the aggregated posterior has been introduced, which is the expectation of the posterior over the data distribution. This prior is optimal for the VAE in terms of maximizing the training objective function. However, the KL divergence with the aggregated posterior cannot be calculated in closed form, which prevents us from using this optimal prior. With the proposed method, we introduce the density ratio trick to estimate this KL divergence without modeling the aggregated posterior explicitly. Since the density ratio trick does not work well in high dimensions, we rewrite this KL divergence, which contains a high-dimensional density ratio, as the sum of an analytically calculable term and a low-dimensional density ratio term, to which the density ratio trick is applied. Experiments on various datasets show that the VAE with this implicit optimal prior achieves high density estimation performance.
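
A sketch of the rewriting step described in the abstract, under the usual VAE notation (these symbols are assumptions, not fixed by this page): q_\phi(z|x) is the encoder posterior, p(z) is the standard Gaussian prior, and q(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}[q_\phi(z|x)] is the aggregated posterior.

\begin{align*}
\mathrm{KL}\bigl(q_\phi(z \mid x)\,\|\,q(z)\bigr)
  &= \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log q_\phi(z \mid x) - \log p(z)\bigr]
   + \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p(z) - \log q(z)\bigr] \\
  &= \underbrace{\mathrm{KL}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr)}_{\text{closed form for Gaussians}}
   + \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \tfrac{p(z)}{q(z)}\right]}_{\text{low-dimensional density ratio}}
\end{align*}

Under this decomposition, the first term is the ordinary Gaussian-to-Gaussian KL of the VAE and is analytically calculable, while the second term depends only on the latent-space ratio p(z)/q(z); the density ratio trick, for example a discriminator trained to distinguish samples from q(z) and p(z), can therefore be applied in the low-dimensional latent space rather than in data space.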

Published

2019-07-17

How to Cite

Takahashi, H., Iwata, T., Yamanaka, Y., Yamada, M., & Yagi, S. (2019). Variational Autoencoder with Implicit Optimal Priors. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5066-5073. https://doi.org/10.1609/aaai.v33i01.33015066

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Machine Learning