Rethinking Label Refurbishment: Model Robustness under Label Noise

Authors

  • Yangdi Lu, McMaster University
  • Zhiwei Xu, McMaster University
  • Wenbo He, McMaster University

DOI:

https://doi.org/10.1609/aaai.v37i12.26751

Keywords:

General

Abstract

A family of methods that generate soft labels by mixing hard labels with a certain distribution, known as label refurbishment, is widely used to train deep neural networks. However, some of these methods are still poorly understood in the presence of label noise. In this paper, we revisit four label refurbishment methods and reveal the strong connections between them. We find that they affect neural network models in different ways: two of them smooth the estimated posterior for a regularization effect, while the other two force the model to produce high-confidence predictions. We conduct extensive experiments to evaluate the related methods and observe that both effects improve model generalization under label noise. Furthermore, we theoretically show that both effects lead to generalization guarantees on the clean distribution despite training with noisy labels.
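
To make the soft-label construction concrete, the sketch below mixes a one-hot label with another distribution. It is a minimal illustration under stated assumptions, not the paper's implementation: the names refurbish_label, mix_dist, and alpha are chosen here for exposition, with a uniform mixing distribution standing in for label smoothing (the model's own softmax prediction would stand in for bootstrapping-style refurbishment).

    import numpy as np

    def refurbish_label(one_hot, mix_dist, alpha):
        # Soft label = convex combination of the hard (one-hot) label
        # and another distribution: a uniform distribution yields
        # label smoothing; the model's own prediction yields a
        # bootstrapping-style refurbished label.
        return (1.0 - alpha) * one_hot + alpha * mix_dist

    num_classes = 10
    hard = np.eye(num_classes)[3]                      # one-hot label for class 3
    uniform = np.full(num_classes, 1.0 / num_classes)  # uniform mixing distribution
    soft = refurbish_label(hard, uniform, alpha=0.1)   # e.g. label smoothing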

Published

2023-06-26

How to Cite

Lu, Y., Xu, Z., & He, W. (2023). Rethinking Label Refurbishment: Model Robustness under Label Noise. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15000-15008. https://doi.org/10.1609/aaai.v37i12.26751

Section

AAAI Special Track on Safe and Robust AI