Analysing the Noise Model Error for Realistic Noisy Label Data
Keywords: Classification and Regression, Semi-Supervised Learning, Information Extraction
Abstract
Distant and weak supervision make it possible to obtain large amounts of labeled training data quickly and cheaply, but these automatic annotations tend to contain many errors. A popular technique for overcoming the negative effects of these noisy labels is noise modelling, where the underlying noise process is modelled explicitly. In this work, we study the quality of these estimated noise models from the theoretical side by deriving the expected error of the noise model. Apart from evaluating the theoretical results on commonly used synthetic noise, we also publish NoisyNER, a new noisy label dataset from the NLP domain that was obtained through a realistic distant supervision technique. It provides seven sets of labels with differing noise patterns, enabling the evaluation of different noise levels on the same instances. Parallel, clean labels are available, making it possible to study scenarios where a small amount of gold-standard data can be leveraged. Our theoretical results and the corresponding experiments give insights into the factors that influence the noise model estimation, such as the noise distribution and the sampling technique.
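To make the noise-modelling setting concrete, the sketch below estimates a noise transition matrix from a small sample of paired clean and noisy labels, as in the scenario above where some gold-standard data is available. This is an illustrative confusion-matrix-style estimator under our own assumptions, not the paper's exact method or derivation.

```python
import numpy as np

def estimate_noise_matrix(clean_labels, noisy_labels, num_classes):
    """Estimate a noise transition matrix T, where T[i, j] approximates
    the probability that an instance with clean label i receives noisy
    label j. Hypothetical sketch; the paper analyses the expected error
    of such estimates, which shrinks as the clean sample grows."""
    counts = np.zeros((num_classes, num_classes))
    for c, n in zip(clean_labels, noisy_labels):
        counts[c, n] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Guard against division by zero for classes absent from the clean sample
    row_sums[row_sums == 0] = 1
    return counts / row_sums

# Toy example: 3 classes, a handful of paired clean/noisy labels
clean = [0, 0, 0, 1, 1, 2]
noisy = [0, 0, 1, 1, 1, 2]
T = estimate_noise_matrix(clean, noisy, 3)
# Row 0 reflects that one of three class-0 instances was mislabeled as class 1
```

Because the matrix is estimated from a finite sample, it deviates from the true noise process; the size of that deviation, and how it depends on the noise distribution and the sampling technique, is what the paper's expected-error analysis quantifies.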
How to Cite
Hedderich, M. A., Zhu, D., & Klakow, D. (2021). Analysing the Noise Model Error for Realistic Noisy Label Data. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7675-7684. https://doi.org/10.1609/aaai.v35i9.16938
AAAI Technical Track on Machine Learning II