Meta-Amortized Variational Inference and Learning

Authors

  • Mike Wu, Stanford University
  • Kristy Choi, Stanford University
  • Noah Goodman, Stanford University
  • Stefano Ermon, Stanford University

DOI:

https://doi.org/10.1609/aaai.v34i04.6111

Abstract

Despite recent successes in probabilistic modeling and its applications, generative models trained with traditional inference techniques struggle to adapt to new distributions, even when the target distribution is closely related to those seen during training. In this work, we present a doubly-amortized variational inference procedure to address this challenge. By sharing computation across not only a set of query inputs but also a set of different, related probabilistic models, we learn transferable latent representations that generalize across several related distributions. In particular, given a set of distributions over images, we find that the learned representations transfer across different data transformations. We empirically demonstrate the effectiveness of our method by introducing the MetaVAE and showing that it significantly outperforms baselines on downstream image classification tasks on MNIST (10-50%) and NORB (10-35%).
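For intuition, the sketch below illustrates what "doubly-amortized" inference can look like in code: a single inference network is shared across query inputs and across related distributions, which it observes through a pooled summary of a few support samples from each distribution. This is a minimal illustrative assumption, not the architecture from the paper; the class name MetaAmortizedVAE, the layer sizes, and the pooling scheme are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaAmortizedVAE(nn.Module):
    """Illustrative sketch (not the authors' model): a VAE whose inference
    network is amortized over inputs *and* over related distributions via a
    pooled context vector computed from support samples."""

    def __init__(self, x_dim=784, z_dim=16, h_dim=128, c_dim=64):
        super().__init__()
        # Summarizes a set of samples from one distribution into a context vector.
        self.summarizer = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                        nn.Linear(h_dim, c_dim))
        # Encoder q(z | x, context), shared across inputs and distributions.
        self.encoder = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, 2 * z_dim))
        # Decoder p(x | z, context).
        self.decoder = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def forward(self, x, support):
        # support: (n_support, x_dim) samples from the same distribution as x.
        context = self.summarizer(support).mean(dim=0, keepdim=True)   # (1, c_dim)
        context = context.expand(x.size(0), -1)                        # (batch, c_dim)
        mu, logvar = self.encoder(torch.cat([x, context], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()           # reparameterize
        logits = self.decoder(torch.cat([z, context], dim=-1))
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum') / x.size(0)
        kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
        return recon + kl  # negative ELBO, averaged over the batch

# Toy usage: average the loss over several related "distributions" (tasks).
model = MetaAmortizedVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tasks = [torch.rand(32, 784).round() for _ in range(4)]  # stand-ins for related datasets
loss = sum(model(batch, batch[:8]) for batch in tasks) / len(tasks)
opt.zero_grad()
loss.backward()
opt.step()
```

The point of the sketch is only that the encoder conditions on a distribution-level context in addition to the individual input, so the same weights can produce approximate posteriors for several related generative models.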

Published

2020-04-03

How to Cite

Wu, M., Choi, K., Goodman, N., & Ermon, S. (2020). Meta-Amortized Variational Inference and Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6404-6412. https://doi.org/10.1609/aaai.v34i04.6111

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning