SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations

Authors

  • Hao-Zhe Feng, Zhejiang University
  • Kezhi Kong, University of Maryland, College Park
  • Minghao Chen, Zhejiang University
  • Tianye Zhang, Zhejiang University
  • Minfeng Zhu, Zhejiang University
  • Wei Chen, Zhejiang University

Keywords

Semi-Supervised Learning, Representation Learning, (Deep) Neural Network Algorithms, Unsupervised & Self-Supervised Learning

Abstract

Semi-supervised variational autoencoders (VAEs) have obtained strong results, but they also face the challenge that good ELBO values do not always imply accurate inference results. In this paper, we investigate this problem and identify two causes: (1) the ELBO objective cannot utilize the label information directly; (2) a bottleneck value exists, and optimizing the ELBO beyond this value does not improve inference accuracy. Based on these experimental results, we propose SHOT-VAE to address these problems without introducing additional prior knowledge. SHOT-VAE offers two contributions: (1) a new ELBO approximation named smooth-ELBO that integrates the label predictive loss into the ELBO; (2) an approximation based on optimal interpolation that breaks the ELBO value bottleneck by reducing the margin between the ELBO and the data likelihood. SHOT-VAE achieves good performance, with a 25.30% error rate on CIFAR-100 with 10k labels, and reduces the error rate to 6.11% on CIFAR-10 with 4k labels.
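The abstract's first contribution, folding the label predictive loss into the ELBO, can be illustrated with a minimal NumPy sketch. This is a generic semi-supervised VAE objective (negative ELBO plus a weighted classification term), not the paper's exact smooth-ELBO derivation; the function names, the Bernoulli decoder assumption, and the weight `alpha` are illustrative assumptions.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def bernoulli_nll(x, x_recon, eps=1e-8):
    # Negative log-likelihood of binary data x under Bernoulli(x_recon)
    return -np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))

def cross_entropy(y_onehot, y_probs, eps=1e-8):
    # Label predictive loss for the classifier head
    return -np.sum(y_onehot * np.log(y_probs + eps))

def label_aware_loss(x, x_recon, mu, logvar, y_onehot, y_probs, alpha=1.0):
    # Negative ELBO (reconstruction + KL) plus a weighted label
    # predictive term, so labels influence the objective directly.
    neg_elbo = bernoulli_nll(x, x_recon) + gaussian_kl(mu, logvar)
    return neg_elbo + alpha * cross_entropy(y_onehot, y_probs)
```

With this form, a more accurate classifier head strictly lowers the combined loss even when the reconstruction and KL terms are unchanged, which is the qualitative property the label-aware objective is after.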

Published

2021-05-18

How to Cite

Feng, H.-Z., Kong, K., Chen, M., Zhang, T., Zhu, M., & Chen, W. (2021). SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7413-7421. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16909

Section

AAAI Technical Track on Machine Learning I