A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise

Authors

  • Jongwoo Ko, KAIST
  • Bongsoo Yi, University of North Carolina at Chapel Hill
  • Se-Young Yun, KAIST

DOI:

https://doi.org/10.1609/aaai.v37i7.26004

Keywords:

ML: Adversarial Learning & Robustness, ML: Deep Neural Network Algorithms

Abstract

As deep neural networks can easily overfit noisy labels, robust training in their presence has become an important challenge in modern deep learning. While existing methods address this problem in various directions, they still produce unpredictable, sub-optimal results because they rely on posterior information estimated by a feature extractor that is itself corrupted by noisy labels. Lipschitz regularization alleviates this problem by training a robust feature extractor, but it requires longer training times and expensive computation. Motivated by this, we propose a simple yet effective method, called ALASCA, which efficiently provides a robust feature extractor under label noise. ALASCA integrates two key ingredients: (1) adaptive label smoothing, based on our theoretical analysis showing that label smoothing implicitly induces Lipschitz regularization, and (2) auxiliary classifiers, which make intermediate Lipschitz regularization practical at negligible computational cost. We conduct wide-ranging experiments on ALASCA and combine it with previous noise-robust methods on several synthetic and real-world datasets. Experimental results show that our framework consistently and efficiently improves both the robustness of feature extractors and the performance of existing baselines.
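To make the "adaptive label smoothing" ingredient concrete, here is a minimal NumPy sketch of cross-entropy with a per-sample smoothing factor. This is an illustration of the general mechanism only, not the paper's exact formulation: the function name and the assumption that each sample carries its own `alpha` (which ALASCA would derive adaptively, e.g. from auxiliary-classifier outputs) are ours.

```python
import numpy as np

def smoothed_cross_entropy(logits, targets, alpha):
    """Cross-entropy against labels smoothed per-sample by alpha in [0, 1).

    logits:  (batch, classes) array of raw scores
    targets: (batch,) integer class indices
    alpha:   (batch,) smoothing factors; alpha = 0 recovers standard CE
    """
    batch, num_classes = logits.shape
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    one_hot = np.eye(num_classes)[targets]
    # Mix each one-hot target toward the uniform distribution by its own alpha.
    soft = (1 - alpha[:, None]) * one_hot + alpha[:, None] / num_classes
    return float(-(soft * log_probs).sum(axis=1).mean())
```

Setting a larger `alpha` for samples suspected of carrying noisy labels pulls their targets toward the uniform distribution, which dampens the gradient signal from (potentially wrong) hard labels — the smoothing-as-implicit-Lipschitz-regularization view the abstract refers to.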

Published

2023-06-26

How to Cite

Ko, J., Yi, B., & Yun, S.-Y. (2023). A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8325-8333. https://doi.org/10.1609/aaai.v37i7.26004

Section

AAAI Technical Track on Machine Learning II