Debiasing Evaluations That Are Biased by Evaluations

Authors

  • Jingyan Wang Carnegie Mellon University
  • Ivan Stelmakh Carnegie Mellon University
  • Yuting Wei Carnegie Mellon University
  • Nihar B. Shah Carnegie Mellon University

Keywords:

Learning Preferences or Rankings, Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

It is common to evaluate a set of items by soliciting people to rate them. For example, universities ask students to rate the teaching quality of their instructors, and conference organizers ask authors of submissions to evaluate the quality of the reviews. However, in these applications, students often give a higher rating to a course if they receive a higher grade in that course, and authors often give a higher rating to the reviews if their papers are accepted to the conference. In this work, we call these external factors the "outcome" experienced by people, and consider the problem of mitigating these outcome-induced biases in the given ratings when some information about the outcome is available. We formulate the information about the outcome as a known partial ordering on the bias. We propose a debiasing method that solves a regularized optimization problem under this ordering constraint, and also provide a carefully designed cross-validation method that adaptively chooses the appropriate amount of regularization. We provide theoretical guarantees on the performance of our algorithm, as well as experimental evaluations.
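To make the setup concrete, here is a minimal toy sketch of the idea described in the abstract: ratings are modeled as true quality plus an outcome-dependent bias, the bias estimate is shrunk by a regularization parameter, and a known ordering constraint on the biases is enforced by projection. All names (`debias_two_groups`, the two-group outcome, the ridge-style shrinkage) are illustrative assumptions, not the paper's exact estimator or notation.

```python
import numpy as np

def debias_two_groups(ratings, outcome, lam):
    """Toy sketch of outcome-induced debiasing (hypothetical, not the
    paper's algorithm). Each rating is quality + bias, where the bias
    depends on a binary outcome (True = favorable, e.g. paper accepted).
    The known partial ordering is assumed to say: favorable-outcome
    bias >= unfavorable-outcome bias."""
    ratings = np.asarray(ratings, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    mu = ratings.mean()  # crude common-quality estimate
    # Regularized per-group bias estimates: group-mean deviations,
    # shrunk toward zero by the regularization parameter lam.
    b_hi = (ratings[outcome].mean() - mu) / (1.0 + lam)
    b_lo = (ratings[~outcome].mean() - mu) / (1.0 + lam)
    # Project onto the ordering constraint b_hi >= b_lo
    # (pool-adjacent-violators step for two groups).
    if b_hi < b_lo:
        b_hi = b_lo = (b_hi + b_lo) / 2.0
    debiased = ratings.copy()
    debiased[outcome] -= b_hi
    debiased[~outcome] -= b_lo
    return debiased, (b_hi, b_lo)
```

With `lam = 0` the sketch simply removes the group-mean gap; larger `lam` trusts the raw ratings more and corrects less, which is the trade-off the paper's cross-validation procedure is designed to tune adaptively.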

Published

2021-05-18

How to Cite

Wang, J., Stelmakh, I., Wei, Y., & Shah, N. B. (2021). Debiasing Evaluations That Are Biased by Evaluations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10120-10128. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17214

Section

AAAI Technical Track on Machine Learning IV