Generalized Adversarially Learned Inference


  • Yatin Dandi IIT Kanpur
  • Homanga Bharadhwaj University of Toronto, Vector Institute
  • Abhishek Kumar Google Brain
  • Piyush Rai IIT Kanpur


Neural Generative Models & Autoencoders, Representation Learning, Unsupervised & Self-Supervised Learning, Adversarial Learning & Robustness


Allowing effective inference of latent vectors while training GANs can greatly increase their applicability in various downstream tasks. Recent approaches, such as the ALI and BiGAN frameworks, infer latent variables in GANs by adversarially training an image generator along with an encoder to match two joint distributions of image and latent vector pairs. We generalize these approaches to incorporate multiple layers of feedback on reconstructions, self-supervision, and other forms of supervision based on prior or learned knowledge about the desired solutions. We achieve this by modifying the discriminator's objective so that it must correctly identify more than two joint distributions of tuples of an arbitrary number of random variables, consisting of images, latent vectors, and other variables generated through auxiliary tasks such as reconstruction and inpainting, or as outputs of suitable pre-trained models. We design a non-saturating maximization objective for the generator-encoder pair and prove that the resulting adversarial game has a global optimum that simultaneously matches all the distributions. Within the proposed framework, we introduce a novel set of techniques for providing self-supervised feedback to the model based on properties such as patch-level correspondence and cycle consistency of reconstructions. Through comprehensive experiments, we demonstrate the efficacy, scalability, and flexibility of the proposed approach on a variety of tasks. The appendix of the paper is available online.
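The two losses described in the abstract can be sketched concretely. Below is a minimal NumPy illustration, not the paper's implementation: we assume a discriminator that outputs K-class logits (one class per joint distribution of tuples) trained with softmax cross-entropy, and a non-saturating generator-encoder loss that maximizes the log-probability the discriminator assigns to the other classes rather than minimizing the log-probability of the correct one. The function names and the exact form of the non-saturating loss are our assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_loss(logits, labels):
    # Cross-entropy: the discriminator learns to identify which of the
    # K joint distributions each tuple was drawn from.
    p = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(p[np.arange(n), labels] + 1e-12))

def non_saturating_gen_loss(logits, labels):
    # Non-saturating objective for the generator-encoder pair: instead
    # of minimizing log p(correct class), which yields vanishing
    # gradients when the discriminator is confident, maximize the
    # log-probabilities assigned to the remaining K-1 classes.
    p = softmax(logits)
    n, k = logits.shape
    mask = np.ones((n, k), dtype=bool)
    mask[np.arange(n), labels] = False  # drop the true-class entries
    return -np.mean(np.log(p[mask] + 1e-12))
```

With K = 2 this reduces to the familiar BiGAN/ALI setting; the generalization is simply that each auxiliary task (reconstruction, inpainting, pre-trained-model outputs) contributes its own joint distribution, i.e. its own class label.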




How to Cite

Dandi, Y., Bharadhwaj, H., Kumar, A., & Rai, P. (2021). Generalized Adversarially Learned Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7185-7192.



AAAI Technical Track on Machine Learning I