Recovering True Classifier Performance in Positive-Unlabeled Learning

Authors

  • Shantanu Jain, Indiana University
  • Martha White, Indiana University
  • Predrag Radivojac, Indiana University

DOI:

https://doi.org/10.1609/aaai.v31i1.10937

Keywords:

ROC curve, AUC, Precision Recall curve, asymmetric noise, positive unlabeled learning, class prior estimation

Abstract

A common approach in positive-unlabeled learning is to train a classification model to discriminate between the labeled and unlabeled data. This strategy is in fact known to give an optimal classifier under mild conditions; however, it results in biased empirical estimates of the classifier's performance. In this work, we show that the typically used performance measures, such as the receiver operating characteristic (ROC) curve or the precision-recall curve obtained on such data, can be corrected with the knowledge of class priors, i.e., the proportions of the positive and negative examples in the unlabeled data. We extend the results to a noisy setting where some of the examples labeled positive are in fact negative and show that the correction also requires the knowledge of the proportion of noisy examples in the labeled positives. Using state-of-the-art algorithms to estimate the positive class prior and the proportion of noise, we experimentally evaluate two correction approaches and demonstrate their efficacy on real-life data.
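The ROC correction described in the abstract can be sketched as follows. In the noiseless setting, the TPR estimated on the labeled positives is already unbiased, but the "FPR" estimated on the unlabeled set is inflated because a fraction α of the unlabeled examples are actually positive; inverting the mixture recovers the true FPR. This is a minimal illustration of that idea, not the authors' code — the function name and interface are invented for this sketch:

```python
import numpy as np

def corrected_roc(pos_scores, unl_scores, alpha):
    """Recover the true ROC curve from PU data (noiseless labeled positives).

    pos_scores: classifier scores on labeled positive examples
    unl_scores: classifier scores on unlabeled examples
    alpha:      class prior, the fraction of positives in the unlabeled data
    """
    thresholds = np.unique(np.concatenate([pos_scores, unl_scores]))[::-1]
    fpr, tpr = [], []
    for t in thresholds:
        tpr_t = np.mean(pos_scores >= t)     # unbiased TPR on clean labeled positives
        fpr_obs = np.mean(unl_scores >= t)   # biased "FPR" measured on the mixture
        # The unlabeled data is a mixture of positives and negatives, so
        # fpr_obs = alpha * tpr + (1 - alpha) * fpr_true; solve for fpr_true.
        fpr_true = (fpr_obs - alpha * tpr_t) / (1.0 - alpha)
        fpr.append(float(np.clip(fpr_true, 0.0, 1.0)))
        tpr.append(tpr_t)
    return np.array(fpr), np.array(tpr)
```

In the noisy setting the paper describes, the TPR estimate is also biased, and undoing it requires the proportion of negatives among the labeled positives in addition to the class prior.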

Published

2017-02-13

How to Cite

Jain, S., White, M., & Radivojac, P. (2017). Recovering True Classifier Performance in Positive-Unlabeled Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10937