Learning Flexible Latent Representations via Encapsulated Variational Encoders
DOI: https://doi.org/10.1609/aaai.v33i01.33019913

Abstract
Learning flexible latent representations of observed data is an important precursor for most downstream AI applications. To this end, we propose a novel form of variational encoder, the encapsulated variational encoder (EVE), which exerts direct control over the encoded latent representations, together with its learning algorithm, the EVE-compatible automatic variational differentiation inference algorithm. Armed with this property, the EVE can learn both converged and diverged latent representations. Using CIFAR-10 as an example, we show that learning converged latent representations considerably improves the discriminative performance of the semi-supervised EVE. Using MNIST as a demonstration, we show that the generative modelling performance of the EVE-induced variational auto-encoder (EVAE) is greatly enhanced by learned diverged latent representations.
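To make the idea of a variational encoder with controllable latents concrete, here is a minimal PyTorch sketch. The Gaussian encoder and reparameterization trick are standard; the `latent_control_penalty` function (its name, its `mode` flag, and the class-mean distance term) is purely hypothetical, added only to illustrate what pulling latents together ("converged") or pushing them apart ("diverged") could look like. It is not the paper's actual EVE construction.

```python
import torch
import torch.nn as nn

class VariationalEncoder(nn.Module):
    """A standard Gaussian variational encoder q(z|x)."""
    def __init__(self, in_dim=784, hidden=256, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

def latent_control_penalty(z, labels, mode="converge"):
    """Hypothetical regularizer (not from the paper): penalize the
    mean squared distance of each latent to its class mean, pulling
    same-class latents together ('converge') or apart ('diverge')."""
    penalty = z.new_zeros(())
    for c in labels.unique():
        zc = z[labels == c]
        d = ((zc - zc.mean(dim=0)) ** 2).sum(dim=1).mean()
        penalty = penalty + (d if mode == "converge" else -d)
    return penalty
```

In such a setup, the penalty would simply be added to the usual evidence lower bound objective with some weight, with the sign of the term selecting between convergent and divergent latent geometry; how the EVE itself achieves this control is specified in the paper, not here.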