The Gap on Gap: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets
DOI:
https://doi.org/10.1609/aaai.v35i14.17557
Keywords:
Ethics -- Bias, Fairness, Transparency & Privacy
Abstract
Diagnostic datasets that can detect biased models are an important prerequisite for bias reduction within natural language processing. However, undesired patterns in the collected data can make such tests incorrect. For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies. In this work, we introduce a theoretically grounded method for weighting test samples to cope with such patterns in the test data. We demonstrate the method on the GAP dataset for coreference resolution. We annotate GAP with spans of all personal names and show that examples in the female subset contain more personal names and a longer distance between pronouns and their referents, potentially affecting the bias score in an undesired way. Using our weighting method, we find the set of weights on the test instances that should be used for coping with these correlations, and we re-evaluate 16 recently released coreference models.
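The abstract does not spell out how the weights are derived, so as a minimal illustrative sketch only (not the paper's theoretically grounded method), one generic way to reweight a test subset is inverse-frequency balancing over a binned confounder, e.g. binned pronoun-antecedent distance, so that the weighted feminine subset matches the masculine subset's confounder distribution; all names and data below are hypothetical:

```python
from collections import Counter

def balancing_weights(confounder_a, confounder_b):
    """Weight subset A so its confounder distribution matches subset B's.

    confounder_a, confounder_b: lists of discretised confounder values
    (e.g. binned pronoun-antecedent distances) for the two subsets.
    Returns one weight per example in subset A.
    """
    freq_a = Counter(confounder_a)
    freq_b = Counter(confounder_b)
    n_a, n_b = len(confounder_a), len(confounder_b)
    # Each example's weight is (target bin density) / (source bin density),
    # so over-represented bins in A are down-weighted and vice versa.
    return [(freq_b[v] / n_b) / (freq_a[v] / n_a) if freq_a[v] else 0.0
            for v in confounder_a]

# Hypothetical toy data: the "female" subset over-represents
# long-distance examples (bin 2) relative to the "male" subset.
female = [0, 1, 2, 2, 2, 2]
male = [0, 0, 1, 1, 2, 2]
weights = balancing_weights(female, male)
# After weighting, bin 2 carries 4 * 0.5 = 2 units of mass,
# matching its frequency in the male subset.
```

A weighted accuracy computed with these per-example weights would then be less sensitive to the distance confounder when comparing the two subsets.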
Published
2021-05-18
How to Cite
Kocijan, V., Camburu, O.-M., & Lukasiewicz, T. (2021). The Gap on Gap: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 13180-13188. https://doi.org/10.1609/aaai.v35i14.17557
Section
AAAI Technical Track on Speech and Natural Language Processing I