Let the Void Be Void: Robust Open-Set Semi-Supervised Learning via Selective Non-Alignment

Authors

  • You Rim Choi, Seoul National University
  • Subeom Park, Seoul National University
  • Seojun Heo, Seoul National University
  • Eunchung Noh, Samsung Electronics
  • Hyung-Sin Kim, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v40i25.39194

Abstract

Open-set semi-supervised learning (OSSL) leverages unlabeled data containing both in-distribution (ID) and unknown out-of-distribution (OOD) samples, aiming simultaneously to improve closed-set accuracy and to detect novel OOD instances. Existing methods either discard valuable information from uncertain samples or force-align every unlabeled sample to one or a few synthetic “catch-all” representations, resulting in geometric collapse and overconfidence that holds only for previously seen OODs. To address these limitations, we introduce selective non-alignment, adding a novel “skip” operator to the conventional pull and push operations of contrastive learning. Our framework, SkipAlign, selectively skips alignment (pulling) for low-confidence unlabeled samples, retaining only gentle repulsion against ID prototypes. This approach turns uncertain samples into a pure repulsion signal, yielding tighter ID clusters and naturally dispersed OOD features. Extensive experiments demonstrate that SkipAlign significantly outperforms state-of-the-art methods in detecting unseen OOD data without sacrificing ID classification accuracy.
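The abstract's core idea, pull confident samples toward ID prototypes while low-confidence samples receive only repulsion and no alignment target, can be sketched as a loss function. This is a minimal NumPy illustration of the general mechanism, not the authors' implementation; the function name, threshold `tau`, and temperature `temp` are assumptions for exposition.

```python
import numpy as np

def skipalign_loss(features, prototypes, confidences, tau=0.95, temp=0.1):
    """Illustrative sketch of selective non-alignment (hypothetical, not the paper's code).

    features:    (N, D) L2-normalized unlabeled features
    prototypes:  (K, D) L2-normalized ID class prototypes
    confidences: (N,)   classifier confidence per unlabeled sample
    tau:         confidence threshold separating pull vs. skip
    temp:        softmax temperature
    """
    sims = features @ prototypes.T / temp       # (N, K) scaled similarities
    high = confidences >= tau                   # confident samples: pull + push
    loss = 0.0
    # Pull: align each confident sample with its nearest ID prototype
    # via a standard cross-entropy over prototype similarities.
    if high.any():
        logits = sims[high]
        targets = logits.argmax(axis=1)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss += -log_probs[np.arange(len(targets)), targets].mean()
    # Skip: low-confidence samples get NO alignment target (no catch-all
    # representation), only a gentle repulsion from every ID prototype,
    # penalized here with a log-sum-exp over similarities.
    low = ~high
    if low.any():
        repel = np.logaddexp.reduce(sims[low], axis=1)
        loss += repel.mean()
    return loss
```

Because uncertain samples are never pulled toward any synthetic target, they contribute only a dispersive gradient, which is what lets OOD features spread out instead of collapsing onto a shared representation.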

Published

2026-03-14

How to Cite

Choi, Y. R., Park, S., Heo, S., Noh, E., & Kim, H.-S. (2026). Let the Void Be Void: Robust Open-Set Semi-Supervised Learning via Selective Non-Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20579-20587. https://doi.org/10.1609/aaai.v40i25.39194

Section

AAAI Technical Track on Machine Learning II