Denoising Criterion for Variational Auto-Encoding Framework

Authors

  • Daniel Im Im, University of Montreal
  • Sungjin Ahn, University of Montreal
  • Roland Memisevic, University of Montreal
  • Yoshua Bengio, University of Montreal

DOI:

https://doi.org/10.1609/aaai.v31i1.10777

Keywords:

Variational Auto-encoder, Deep generative models

Abstract

Denoising autoencoders (DAE) are trained to reconstruct their clean inputs with noise injected at the input level, while variational autoencoders (VAE) are trained with noise injected in their stochastic hidden layer, with a regularizer that encourages this noise injection. In this paper, we show that injecting noise both at the input and in the stochastic hidden layer can be advantageous, and we propose a modified variational lower bound as an improved objective function in this setup. When the input is corrupted, the standard VAE lower bound involves marginalizing the encoder conditional distribution over the input noise, which makes the training criterion intractable. Instead, we propose a modified training criterion that corresponds to a tractable bound when the input is corrupted. Experimentally, we find that the proposed denoising variational autoencoder (DVAE) yields better average log-likelihood than the VAE and the importance weighted autoencoder on the MNIST and Frey Face datasets.
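To make the setup concrete, below is a minimal sketch in Python/PyTorch of the kind of training step the abstract describes: the input is corrupted before encoding (DAE-style noise), the latent code is sampled by reparameterization (VAE-style noise), and the decoder is asked to reconstruct the clean input from a code drawn from the encoder applied to the corrupted input. All names (DVAE, enc, dec, noise_std, the Gaussian corruption, and the network sizes) are illustrative assumptions, not the authors' code or their exact objective.

    # Illustrative sketch only; hyperparameters and corruption process are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DVAE(nn.Module):
        def __init__(self, x_dim=784, h_dim=400, z_dim=20):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
            self.enc_mu = nn.Linear(h_dim, z_dim)
            self.enc_logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                     nn.Linear(h_dim, x_dim))

        def loss(self, x, noise_std=0.1):
            # Noise injected at the input level (as in a DAE): corrupt x before encoding.
            x_tilde = x + noise_std * torch.randn_like(x)
            h = self.enc(x_tilde)
            mu, logvar = self.enc_mu(h), self.enc_logvar(h)
            # Noise injected in the stochastic hidden layer (as in a VAE): reparameterize.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            logits = self.dec(z)
            # Reconstruct the *clean* x from z ~ q(z | x_tilde), plus the usual
            # KL regularizer toward the prior; negated bound, i.e. a loss to minimize.
            recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + kl

The single-sample Monte Carlo estimate above corresponds to a tractable surrogate in which the corruption is sampled rather than marginalized; the paper itself derives the modified lower bound that justifies training with such corrupted-input encodings.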

Published

2017-02-13

How to Cite

Im, D. I., Ahn, S., Memisevic, R., & Bengio, Y. (2017). Denoising Criterion for Variational Auto-Encoding Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10777