Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning

Authors

  • Paola Cascante-Bonilla, University of Virginia
  • Fuwen Tan, University of Virginia
  • Yanjun Qi, University of Virginia
  • Vicente Ordonez, University of Virginia

DOI:

https://doi.org/10.1609/aaai.v35i8.16852

Keywords:

Semi-Supervised Learning

Abstract

In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning, where a learning algorithm has access to a small set of labeled samples and a large set of unlabeled samples. Pseudo-labeling works by assigning pseudo-labels to samples in the unlabeled set using a model trained on the combination of the labeled samples and any previously pseudo-labeled samples, and by iteratively repeating this process in a self-training cycle. Current methods seem to have abandoned this approach in favor of consistency regularization methods that train models under a combination of different styles of self-supervised losses on the unlabeled samples and standard supervised losses on the labeled samples. We empirically demonstrate that pseudo-labeling can in fact be competitive with the state of the art, while being more resilient to out-of-distribution samples in the unlabeled set. We identify two key factors that allow pseudo-labeling to achieve such remarkable results: (1) applying curriculum learning principles, and (2) avoiding concept drift by restarting model parameters before each self-training cycle. We obtain 94.91% accuracy on CIFAR-10 using only 4,000 labeled samples, and 68.87% top-1 accuracy on Imagenet-ILSVRC using only 10% of the labeled samples.
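For intuition, here is a minimal sketch of the self-training loop the abstract describes: a confidence curriculum over the unlabeled pool, with model parameters restarted before each cycle. It is written against scikit-learn for brevity; the `curriculum_labeling` function name, the linear classifier, and the percentile schedule are illustrative assumptions, not the authors' exact setup (the paper trains deep networks on images).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def curriculum_labeling(X_lab, y_lab, X_unlab, percentiles=(80, 60, 40, 20, 0)):
    """Self-training with a confidence curriculum and per-cycle restarts.

    Each cycle (1) trains a freshly initialized model on the labeled set plus
    the pseudo-labels kept from the previous cycle, and (2) re-pseudo-labels
    the unlabeled pool, keeping only samples whose top predicted probability
    clears a percentile threshold that is relaxed from cycle to cycle.
    (Illustrative sketch; schedule and model choice are assumptions.)
    """
    X_pseudo = np.empty((0, X_lab.shape[1]))
    y_pseudo = np.empty(0, dtype=y_lab.dtype)
    model = None
    for pct in percentiles:
        # Restart: a brand-new model each cycle avoids accumulating
        # concept drift from earlier, noisier pseudo-labels.
        model = LogisticRegression(max_iter=1000)
        model.fit(np.vstack([X_lab, X_pseudo]),
                  np.concatenate([y_lab, y_pseudo]))

        # Curriculum: keep only the most confident pseudo-labels first;
        # lowering `pct` each cycle admits progressively harder samples
        # (pct=0 keeps the whole unlabeled pool in the final cycle).
        probs = model.predict_proba(X_unlab)
        confidence = probs.max(axis=1)
        keep = confidence >= np.percentile(confidence, pct)
        X_pseudo = X_unlab[keep]
        y_pseudo = model.classes_[probs[keep].argmax(axis=1)]
    return model


# Illustrative usage on synthetic data:
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=5,
                           random_state=0)
model = curriculum_labeling(X[:100], y[:100], X[100:])
```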

Published

2021-05-18

How to Cite

Cascante-Bonilla, P., Tan, F., Qi, Y., & Ordonez, V. (2021). Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6912-6920. https://doi.org/10.1609/aaai.v35i8.16852

Issue

Vol. 35 No. 8 (2021)
Section

AAAI Technical Track on Machine Learning I