Towards Consistent Variational Auto-Encoding (Student Abstract)


  • Yijing Liu, Beijing University of Posts and Telecommunications
  • Shuyu Lin, University of Oxford
  • Ronald Clark, University of Oxford



Variational autoencoders (VAEs) have been a successful approach to learning meaningful representations of data in an unsupervised manner. However, suboptimal representations are often learned because the approximate inference model fails to match the true posterior of the generative model, i.e., an inconsistency exists between the learned inference and generative models. In this paper, we introduce a novel consistency loss that directly requires the encoding of the reconstructed data point to match the encoding of the original data, leading to better representations. Through experiments on MNIST and Fashion MNIST, we demonstrate the existence of this inconsistency in VAE learning and show that our method effectively reduces it.
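The core idea of the consistency loss can be sketched in a few lines: encode the input, reconstruct it, re-encode the reconstruction, and penalize the gap between the two encodings. The following is a minimal NumPy illustration, not the authors' implementation; the linear encoder/decoder, weights, and the squared-distance penalty are all hypothetical stand-ins for the VAE components described above.

```python
import numpy as np

# Toy linear "autoencoder" to illustrate the consistency idea:
# the encoding of the reconstruction x_hat should match the encoding of x.
rng = np.random.default_rng(0)

d, k = 8, 3                      # data and latent dimensions (arbitrary)
W_enc = rng.normal(size=(k, d))  # hypothetical encoder weights
W_dec = rng.normal(size=(d, k))  # hypothetical decoder weights

def encode(x):
    return W_enc @ x             # z = encoder(x)

def decode(z):
    return W_dec @ z             # x_hat = decoder(z)

x = rng.normal(size=d)
z = encode(x)                    # encoding of the original data point
x_hat = decode(z)                # reconstruction
z_hat = encode(x_hat)            # re-encode the reconstruction

# Consistency loss: squared distance between the two encodings.
# In a real VAE this term would be added to the usual ELBO objective.
consistency_loss = float(np.sum((z - z_hat) ** 2))
print(consistency_loss)
```

Minimizing this term alongside the standard VAE objective pushes the encoder toward agreeing with itself on originals and their reconstructions, which is the inconsistency the abstract targets.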




How to Cite

Liu, Y., Lin, S., & Clark, R. (2020). Towards Consistent Variational Auto-Encoding (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13869-13870.



Student Abstract Track