Enhancing Noise-Robust Losses for Large-Scale Noisy Data Learning

Authors

  • Max Staats, Center for Scalable Data Analytics and Artificial Intelligence, Universität Leipzig
  • Matthias Thamm, Universität Leipzig
  • Bernd Rosenow, Universität Leipzig

DOI:

https://doi.org/10.1609/aaai.v39i7.32752

Abstract

Large annotated datasets inevitably contain noisy labels, which pose a major challenge for training deep neural networks, as these networks easily memorize the labels. Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to create a robust loss function that is not susceptible to underfitting. Through a quantitative approach, this paper explores the limited overlap between the network output at initialization and the regions of non-vanishing gradients of bounded loss functions during the initial learning phase. Using these insights, we address the underfitting of several noise-robust losses with a novel method, denoted logit bias, which adds a real number ε to the logit at the position of the correct class. The logit bias enables these losses to achieve state-of-the-art results, even on datasets such as WebVision, consisting of over a million images from 1000 classes. In addition, we demonstrate that our method can be used to determine optimal parameters for several loss functions without having to train networks. Remarkably, our method determines the hyperparameters based solely on the number of classes, resulting in loss functions that require no dataset- or noise-dependent parameters.
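The logit-bias idea from the abstract can be sketched in a few lines: before computing a bounded loss, a constant ε is added to the logit of the correct class, which moves the initial network output into a region where the loss still has a non-vanishing gradient. The sketch below is illustrative only, using mean absolute error (MAE) as one example of a bounded loss and an arbitrary ε; the paper applies the bias to several robust losses and derives ε from the number of classes.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mae_loss_with_logit_bias(logits, labels, epsilon=2.0):
    """MAE loss (a bounded loss) with a logit bias.

    epsilon is added to the logit at the position of the correct
    class before the softmax, as described in the abstract.
    The value 2.0 here is an illustrative placeholder, not the
    paper's class-count-based prescription.
    """
    z = logits.copy()
    rows = np.arange(len(labels))
    z[rows, labels] += epsilon  # the logit bias
    p = softmax(z)
    # MAE between the one-hot target and the predicted
    # probabilities reduces to 2 * (1 - p_correct).
    return 2.0 * (1.0 - p[rows, labels])
```

At initialization (roughly uniform logits), a positive ε raises the probability assigned to the correct class and thus lowers the MAE loss into a region with usable gradients, which is the underfitting remedy the abstract describes.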

Published

2025-04-11

How to Cite

Staats, M., Thamm, M., & Rosenow, B. (2025). Enhancing Noise-Robust Losses for Large-Scale Noisy Data Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7006–7014. https://doi.org/10.1609/aaai.v39i7.32752

Section

AAAI Technical Track on Computer Vision VI