USDNL: Uncertainty-Based Single Dropout in Noisy Label Learning
DOI:
https://doi.org/10.1609/aaai.v37i9.26264
Keywords:
ML: Adversarial Learning & Robustness, ML: Classification and Regression
Abstract
Deep Neural Networks (DNNs) possess powerful prediction capability thanks to their over-parameterized design, although this large model complexity makes them suffer from noisy supervision. Recent approaches seek to eliminate the impact of noisy labels by excluding data points with large loss values, showing promising performance. However, these approaches usually incur significant computation overhead and lack theoretical analysis. In this paper, we adopt a perspective that connects label noise with epistemic uncertainty. We design a simple, efficient, and theoretically provable robust algorithm named USDNL for DNNs with uncertainty-based Dropout. Specifically, we estimate the epistemic uncertainty of the network predictions after early training through single Dropout. The epistemic uncertainty is then combined with the cross-entropy loss to select clean samples during training. Finally, we theoretically show the equivalence of replacing the selection loss with a single cross-entropy loss. Compared to existing small-loss selection methods, USDNL features simplicity for practical scenarios, requiring only the application of Dropout to a standard network, while still achieving high model accuracy. Extensive empirical results on both synthetic and real-world datasets show that USDNL outperforms other methods. Our code is available at https://github.com/kovelxyz/USDNL.
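The selection scheme the abstract describes can be sketched as follows: score each sample by its cross-entropy loss plus an epistemic-uncertainty estimate obtained from a single Dropout forward pass, then keep the lowest-scoring fraction as presumed-clean. This is a minimal NumPy illustration, not the paper's implementation: the KL-divergence uncertainty proxy, the weight `alpha`, and the fixed `keep_ratio` are assumptions made for the sketch.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def select_clean(logits, dropout_logits, labels, keep_ratio=0.5, alpha=1.0):
    """Rank samples by cross-entropy plus a single-Dropout uncertainty
    proxy and return the indices of the lowest-scoring fraction.

    logits          -- deterministic forward pass (N x C)
    dropout_logits  -- one extra forward pass with Dropout active (N x C)
    alpha, KL proxy -- illustrative choices, not the paper's exact form
    """
    p = softmax(logits)
    q = softmax(dropout_logits)
    n = len(labels)
    # per-sample cross-entropy against the (possibly noisy) labels
    ce = -np.log(p[np.arange(n), labels] + 1e-12)
    # epistemic-uncertainty proxy: divergence between the two passes
    unc = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    score = ce + alpha * unc
    k = max(1, int(keep_ratio * n))
    return np.argsort(score)[:k]
```

A sample whose label disagrees with a confident prediction, or whose prediction flips under Dropout, receives a high score and is excluded from the training batch.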
Published
2023-06-26
How to Cite
Xu, Y., Niu, X., Yang, J., Drew, S., Zhou, J., & Chen, R. (2023). USDNL: Uncertainty-Based Single Dropout in Noisy Label Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10648-10656. https://doi.org/10.1609/aaai.v37i9.26264
Section
AAAI Technical Track on Machine Learning IV