The Gap on Gap: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets

Authors

  • Vid Kocijan, University of Oxford
  • Oana-Maria Camburu, University of Oxford; Alan Turing Institute
  • Thomas Lukasiewicz, University of Oxford; Alan Turing Institute

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

Diagnostic datasets that can detect biased models are an important prerequisite for bias reduction within natural language processing. However, undesired patterns in the collected data can make such tests incorrect. For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies. In this work, we introduce a theoretically grounded method for weighting test samples to cope with such patterns in the test data. We demonstrate the method on the GAP dataset for coreference resolution. We annotate GAP with spans of all personal names and show that examples in the female subset contain more personal names and a longer distance between pronouns and their referents, potentially affecting the bias score in an undesired way. Using our weighting method, we find the set of weights on the test instances that should be used for coping with these correlations, and we re-evaluate 16 recently released coreference models.
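The abstract describes reweighting test instances so that confounding surface patterns (e.g., pronoun-referent distance) are balanced across the gendered subsets before a bias score is computed. The sketch below illustrates the general idea with a simple histogram-matching scheme: female-subset examples are weighted so the binned distribution of one confounding feature matches the male subset, and accuracy is then computed as a weighted average. This is an illustrative assumption, not the paper's actual method, which derives weights from a theoretically grounded formulation; all function names here are hypothetical.

```python
import numpy as np

def distribution_matching_weights(conf_f, conf_m, bins=5):
    """Weight female-subset examples so the binned distribution of one
    confounding feature (e.g., pronoun-referent distance) matches the
    male subset.  Simple histogram matching for illustration only."""
    edges = np.histogram_bin_edges(np.concatenate([conf_f, conf_m]), bins=bins)
    # Bin index of each example (0 .. bins-1).
    idx_f = np.clip(np.digitize(conf_f, edges[1:-1]), 0, bins - 1)
    idx_m = np.clip(np.digitize(conf_m, edges[1:-1]), 0, bins - 1)
    # Empirical bin masses in each subset.
    p_f = np.bincount(idx_f, minlength=bins) / len(conf_f)
    p_m = np.bincount(idx_m, minlength=bins) / len(conf_m)
    # Importance ratio per bin; zero out bins unseen in the female subset.
    ratio = np.where(p_f > 0, p_m / np.maximum(p_f, 1e-12), 0.0)
    w = ratio[idx_f]
    return w / w.sum() * len(w)  # normalise to mean weight 1

def weighted_accuracy(correct, weights):
    """Subset score as a weighted average of per-example correctness."""
    return float(np.average(correct, weights=weights))
```

With such weights, the female-subset score is no longer inflated or deflated by the feature imbalance, so the male/female score gap reflects gender bias more directly.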


Published

2021-05-18

How to Cite

Kocijan, V., Camburu, O.-M., & Lukasiewicz, T. (2021). The Gap on Gap: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 13180-13188. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17557


Section

AAAI Technical Track on Speech and Natural Language Processing I