Analysing the Noise Model Error for Realistic Noisy Label Data

Authors

  • Michael A. Hedderich Saarland University, Saarland Informatics Campus, Germany
  • Dawei Zhu Saarland University, Saarland Informatics Campus, Germany
  • Dietrich Klakow Saarland University, Saarland Informatics Campus, Germany

Keywords

Classification and Regression, Semi-Supervised Learning, Information Extraction

Abstract

Distant and weak supervision make it possible to obtain large amounts of labeled training data quickly and cheaply, but these automatic annotations tend to contain many errors. A popular technique to overcome the negative effects of these noisy labels is noise modelling, where the underlying noise process is modelled. In this work, we study the quality of these estimated noise models from the theoretical side by deriving the expected error of the noise model. Apart from evaluating the theoretical results on commonly used synthetic noise, we also publish NoisyNER, a new noisy label dataset from the NLP domain that was obtained through a realistic distant supervision technique. It provides seven sets of labels with differing noise patterns, allowing different noise levels to be evaluated on the same instances. Parallel, clean labels are available, making it possible to study scenarios where a small amount of gold-standard data can be leveraged. Our theoretical results and the corresponding experiments give insights into the factors that influence the noise model estimation, such as the noise distribution and the sampling technique.
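The noise models studied in this line of work are commonly class-conditional noise (transition) matrices, whose entry (i, j) is the probability that an instance with clean label i receives noisy label j. A minimal sketch of the standard empirical estimate from parallel clean and noisy labels, such as those provided by NoisyNER (the function name and toy data are illustrative, not the paper's implementation):

```python
def estimate_noise_matrix(clean_labels, noisy_labels, num_classes):
    """Empirical class-conditional noise matrix.

    Entry [i][j] estimates P(noisy label = j | clean label = i)
    by counting co-occurrences in the parallel annotations.
    """
    counts = [[0] * num_classes for _ in range(num_classes)]
    for c, n in zip(clean_labels, noisy_labels):
        counts[c][n] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # Classes unseen in the clean labels get an all-zero row.
        matrix.append([x / total if total else 0.0 for x in row])
    return matrix

# Toy example: class 0 is always annotated correctly,
# class 1 is flipped to 0 half of the time.
clean = [0, 0, 1, 1, 1, 1]
noisy = [0, 0, 1, 1, 0, 0]
T = estimate_noise_matrix(clean, noisy, num_classes=2)
```

Because the matrix is estimated from a finite sample, it deviates from the true noise process; the paper's theoretical contribution is a bound on the expected size of this deviation.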

Published

2021-05-18

How to Cite

Hedderich, M. A., Zhu, D., & Klakow, D. (2021). Analysing the Noise Model Error for Realistic Noisy Label Data. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7675-7684. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16938

Section

AAAI Technical Track on Machine Learning II