Barely-Supervised Learning: Semi-supervised Learning with Very Few Labeled Images

Authors

  • Thomas Lucas NAVER LABS Europe
  • Philippe Weinzaepfel NAVER LABS Europe
  • Gregory Rogez NAVER LABS Europe

DOI:

https://doi.org/10.1609/aaai.v36i2.20082

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

This paper tackles the problem of semi-supervised learning when the set of labeled samples is limited to a small number of images per class, typically fewer than 10, a problem that we refer to as barely-supervised learning. We analyze in depth the behavior of a state-of-the-art semi-supervised method, FixMatch, which relies on a weakly-augmented version of an image to obtain a supervision signal for a more strongly-augmented version. We show that it frequently fails in barely-supervised scenarios, due to a lack of training signal when no pseudo-label can be predicted with high confidence. We propose a method that leverages self-supervised learning to provide training signal in the absence of confident pseudo-labels. We then propose two methods to refine the pseudo-label selection process, which lead to further improvements. The first relies on a per-sample history of the model predictions, akin to a voting scheme. The second iteratively updates class-dependent confidence thresholds to better explore classes that are under-represented in the pseudo-labels. Our experiments show that our approach performs significantly better on STL-10 in the barely-supervised regime, e.g., with 4 or 8 labeled images per class.
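To make the failure mode discussed in the abstract concrete, here is a minimal NumPy sketch of the FixMatch-style unlabeled loss it analyzes: pseudo-labels come from the weakly-augmented view, and only predictions above a confidence threshold contribute training signal for the strongly-augmented view. The function name and the simplified array-based interface are illustrative assumptions, not the paper's actual implementation; in the barely-supervised regime, the returned mask fraction is often near zero, which is exactly the lack-of-signal problem the paper addresses.

```python
import numpy as np

def fixmatch_unlabeled_loss(logits_weak, logits_strong, threshold=0.95):
    """Simplified FixMatch-style loss on a batch of unlabeled samples.

    logits_weak, logits_strong: (N, C) model outputs for the weakly- and
    strongly-augmented views of the same N images (hypothetical interface).
    Returns the masked cross-entropy loss and the fraction of samples
    that produced a confident pseudo-label.
    """
    # Softmax over the weakly-augmented view to get pseudo-label probabilities.
    p = np.exp(logits_weak - logits_weak.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    conf = p.max(axis=1)            # confidence of the predicted class
    pseudo = p.argmax(axis=1)       # hard pseudo-labels
    mask = conf >= threshold        # only confident samples give training signal

    # Cross-entropy of the strongly-augmented view against the pseudo-labels.
    log_q = logits_strong - logits_strong.max(axis=1, keepdims=True)
    log_q = log_q - np.log(np.exp(log_q).sum(axis=1, keepdims=True))
    ce = -log_q[np.arange(len(pseudo)), pseudo]

    # Average only over confident samples; zero loss if none are confident.
    loss = (ce * mask).sum() / max(mask.sum(), 1)
    return loss, mask.mean()
```

Note how, with very few labels, `conf` rarely exceeds the threshold early in training, so `mask.mean()` stays low and the loss vanishes; the paper's contributions (a self-supervised fallback signal, prediction-history voting, and per-class adaptive thresholds) all target this bottleneck.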

Published

2022-06-28

How to Cite

Lucas, T., Weinzaepfel, P., & Rogez, G. (2022). Barely-Supervised Learning: Semi-supervised Learning with Very Few Labeled Images. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1881-1889. https://doi.org/10.1609/aaai.v36i2.20082

Section

AAAI Technical Track on Computer Vision II