Intelligent Calibration for Bias Reduction in Sentiment Corpora Annotation Process

Authors

  • Idan Toker, Bar-Ilan University
  • David Sarne, Bar-Ilan University
  • Jonathan Schler, Holon Institute of Technology (HIT)

DOI:

https://doi.org/10.1609/aaai.v38i9.28882

Keywords:

HAI: Crowd Sourcing and Human Computation

Abstract

This paper focuses on the inherent anchoring bias present in the sequential annotation of review-sentiment corpora. It proposes employing a limited subset of meticulously chosen reviews at the outset of the process, as a means of calibration, effectively mitigating the phenomenon. Through extensive experimentation we validate the phenomenon of sentiment bias in the annotation process and show that its magnitude can be influenced by pre-calibration. Furthermore, we show that the choice of the calibration set matters, hence the need for effective guidelines for choosing the reviews to be included in it. A comparison of annotators' performance under the proposed calibration with annotation processes that use no calibration or a randomly picked calibration set reveals that the selected calibration set is indeed highly effective: it substantially reduces the average absolute error compared to the other cases. Furthermore, the proposed selection guidelines prove highly robust, picking an effective calibration set even for domains other than the one from which these guidelines were derived.
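The evaluation metric referenced above, average absolute error, can be illustrated with a minimal sketch. All annotation values below are hypothetical, invented purely for demonstration; they are not data from the paper.

```python
# Hypothetical illustration of the average-absolute-error metric used to
# compare calibrated and uncalibrated annotators. Scores are made-up
# 1-5 sentiment ratings; `gold` stands in for the reference labels.

def mean_absolute_error(annotations, gold):
    """Average absolute difference between annotated and reference scores."""
    assert len(annotations) == len(gold)
    return sum(abs(a - g) for a, g in zip(annotations, gold)) / len(gold)

gold = [5, 1, 3, 4, 2]
uncalibrated = [4, 2, 2, 5, 3]  # drifts, e.g. anchored on the first reviews seen
calibrated = [5, 1, 3, 5, 2]    # annotator first labeled a calibration set

print(mean_absolute_error(uncalibrated, gold))  # 1.0
print(mean_absolute_error(calibrated, gold))    # 0.2
```

A lower value indicates annotations closer to the reference labels, which is the sense in which the paper reports the calibration set reducing error.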

Published

2024-03-24

How to Cite

Toker, I., Sarne, D., & Schler, J. (2024). Intelligent Calibration for Bias Reduction in Sentiment Corpora Annotation Process. Proceedings of the AAAI Conference on Artificial Intelligence, 38(9), 10172-10179. https://doi.org/10.1609/aaai.v38i9.28882

Section

AAAI Technical Track on Humans and AI