Learning from Noisy Labels with Complementary Loss Functions
Keywords: Classification and Regression, (Deep) Neural Network Algorithms
Abstract
Recent research reveals that deep neural networks are sensitive to label noise, which leads to poor generalization performance in some tasks. Although various robust loss functions have been proposed to remedy this issue, they suffer from an underfitting problem and are thus insufficient for learning accurate models. On the other hand, the commonly used Cross Entropy (CE) loss, which performs well in standard supervised learning (with clean supervision), is not robust to label noise. In this paper, we propose a general framework for learning robust deep neural networks with complementary loss functions. In our framework, CE and a robust loss play complementary roles in a joint learning objective, according to their respective strengths in learning sufficiency and robustness. Specifically, we find that by exploiting the memorization effect of neural networks, we can easily filter out a proportion of hard samples and generate reliable pseudo labels for easy samples, thereby reducing the label noise to a very low level. Then, we simply learn with CE on the pseudo supervision and with the robust loss on the original noisy supervision. In this procedure, CE guarantees the sufficiency of optimization, while the robust loss serves as a supplement. Experimental results on benchmark classification datasets indicate that the proposed method achieves robust and sufficient deep neural network training simultaneously.
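The joint objective described in the abstract can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's exact formulation: the robust loss is assumed here to be Mean Absolute Error (a common bounded, noise-robust choice), the weighting parameter `alpha` and all function names are hypothetical, and the sample-filtering / pseudo-labeling step is omitted.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(probs, labels):
    # Cross Entropy: strong gradients ensure sufficient optimization,
    # but it is sensitive to label noise.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def mae_loss(probs, labels):
    # Mean Absolute Error: bounded and noise-robust, used here as a
    # stand-in for the paper's generic "robust loss".
    one_hot = np.eye(probs.shape[1])[labels]
    return np.abs(probs - one_hot).sum(axis=1).mean()

def complementary_loss(logits, pseudo_labels, noisy_labels, alpha=1.0):
    # Joint objective: CE on the (cleaner) pseudo supervision plus a
    # robust loss on the original noisy supervision. `alpha` is a
    # hypothetical trade-off weight, not from the paper.
    probs = softmax(logits)
    return ce_loss(probs, pseudo_labels) + alpha * mae_loss(probs, noisy_labels)
```

For a batch where pseudo labels correct some noisy labels, the CE term drives fitting toward the pseudo labels while the bounded MAE term limits the influence of any remaining mislabeled samples.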
How to Cite
Wang, D.-B., Wen, Y., Pan, L., & Zhang, M.-L. (2021). Learning from Noisy Labels with Complementary Loss Functions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10111-10119. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17213
AAAI Technical Track on Machine Learning IV