Training Set Debugging Using Trusted Items

Authors

  • Xuezhou Zhang, University of Wisconsin-Madison
  • Xiaojin Zhu, University of Wisconsin-Madison
  • Stephen Wright, University of Wisconsin-Madison

Keywords:

Machine Learning, Debugging, Data Cleaning, Trustworthy Machine Learning

Abstract

Training set bugs are flaws in the data that adversely affect machine learning. The training set is usually too large for manual inspection, but one may have the resources to verify a few trusted items. The set of trusted items may not by itself be adequate for learning, so we propose an algorithm that uses these items to identify bugs in the training set and thus improve learning. Specifically, our approach seeks the smallest set of changes to the training set labels such that the model learned from the corrected training set predicts the labels of the trusted items correctly. We flag the items whose labels are changed as potential bugs, which human experts can then check for veracity. Finding the bugs in this way is a challenging combinatorial bilevel optimization problem, but it can be relaxed into a continuous optimization problem. Experiments on toy and real data demonstrate that our approach can identify training set bugs effectively and suggest appropriate changes to the labels. Our algorithm is a step toward trustworthy machine learning.
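The bilevel idea in the abstract can be sketched numerically. The toy code below is only an illustrative assumption, not the paper's actual DUTI algorithm: it fixes the inner learner to ridge regression, whose trained weights are a closed-form linear function of the labels, so the outer problem (adjust training labels to fit the trusted items, while a smoothed L1 penalty keeps most labels unchanged) becomes a single continuous objective solvable by gradient descent. All variable names, the injected-bug setup, and the hyperparameters are illustrative.

```python
import numpy as np

# Toy sketch of the relaxed bilevel debugging idea, assuming a ridge-
# regression inner learner: w(y') = A @ y' with A = (X^T X + lam I)^{-1} X^T.
# We adjust the training labels y' so the learned model fits the trusted
# items, while a smoothed L1 penalty keeps y' close to the original y.

rng = np.random.default_rng(0)
n, d, m = 50, 3, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
bugs = rng.choice(n, size=5, replace=False)
y[bugs] += 5.0                       # inject label bugs into the training set

Xt = rng.normal(size=(m, d))         # trusted items (labels verified by hand)
yt = Xt @ w_true

lam, gamma, eps = 0.1, 0.01, 1e-3
A = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)   # w(y') = A @ y'

y_corr = y.copy()
for _ in range(2000):                # gradient descent on the relaxed objective
    w = A @ y_corr
    grad_fit = (Xt @ A).T @ (Xt @ w - yt) / m          # trusted-set loss term
    grad_pen = gamma * (y_corr - y) / np.sqrt((y_corr - y) ** 2 + eps)
    y_corr -= 0.5 * (grad_fit + grad_pen)

# Flag the items whose labels changed the most as candidate bugs.
flagged = np.argsort(-np.abs(y_corr - y))[:5]
```

After optimization, the model trained on the corrected labels fits the trusted items much better than the model trained on the buggy labels, and the largest label changes point a human expert at candidate bugs to verify.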

Published

2018-04-29

How to Cite

Zhang, X., Zhu, X., & Wright, S. (2018). Training Set Debugging Using Trusted Items. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11610