Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design

Authors

  • Steven Jecmen, Carnegie Mellon University
  • Hanrui Zhang, Carnegie Mellon University
  • Ryan Liu, Carnegie Mellon University
  • Fei Fang, Carnegie Mellon University
  • Vincent Conitzer, Duke University
  • Nihar B. Shah, Carnegie Mellon University

DOI

https://doi.org/10.1609/hcomp.v10i1.21991

Keywords

Matching, Assignment, Peer Review, Paper Assignment, Experiment Design

Abstract

Many scientific conferences employ a two-phase paper review process, where some papers are assigned additional reviewers after the initial reviews are submitted. Many conferences also design and run experiments on their paper review process, where some papers are assigned reviewers who provide reviews under an experimental condition. In this paper, we consider the question: how should reviewers be divided between phases or conditions in order to maximize total assignment similarity? We make several contributions towards answering this question. First, we prove that when the set of papers requiring additional review is unknown, a simplified variant of this problem is NP-hard. Second, we empirically show that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment. This uniformly random choice is practical for both the two-phase and conference experiment design settings. Third, we explain this phenomenon by proving theoretical bounds on the suboptimality of this random strategy under certain natural conditions. From these easily interpretable conditions, we provide actionable insights to conference program chairs about whether a random reviewer split is suitable for their conference.
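To illustrate the random-split strategy described in the abstract, the following minimal sketch (not code from the paper) divides reviewers uniformly at random into two phase pools and then computes a similarity-maximizing assignment within each phase. The similarity matrix, reviewer/paper counts, the one-reviewer-per-paper simplification, and the use of SciPy's linear_sum_assignment are all illustrative assumptions; real conference assignments involve reviewer load constraints and are typically solved as a larger optimization problem.

```python
# Minimal sketch, assuming a hypothetical similarity matrix and a
# one-reviewer-per-paper assignment per phase (real settings add load
# constraints and multiple reviewers per paper).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_reviewers, n_papers = 8, 4
similarity = rng.random((n_reviewers, n_papers))  # S[r, p]: hypothetical scores

# Uniformly random split of reviewers into phase-1 and phase-2 pools.
perm = rng.permutation(n_reviewers)
phase1, phase2 = perm[: n_reviewers // 2], perm[n_reviewers // 2:]

def phase_assignment(pool):
    """Assign each paper one reviewer from `pool`, maximizing total similarity."""
    sub = similarity[pool]  # similarity rows restricted to this pool
    rows, cols = linear_sum_assignment(sub, maximize=True)
    assignment = {int(paper): int(pool[r]) for r, paper in zip(rows, cols)}
    return assignment, sub[rows, cols].sum()

for name, pool in [("phase 1", phase1), ("phase 2", phase2)]:
    assignment, total = phase_assignment(pool)
    print(f"{name}: paper -> reviewer {assignment} (total similarity = {total:.3f})")
```

Comparing the total similarity achieved by this random split against the best split found by enumeration (feasible only at toy scale) mirrors, in spirit, the paper's empirical comparison of the random strategy to the oracle optimal split.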


Published

2022-10-14

How to Cite

Jecmen, S., Zhang, H., Liu, R., Fang, F., Conitzer, V., & Shah, N. B. (2022). Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10(1), 102-113. https://doi.org/10.1609/hcomp.v10i1.21991