SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models
DOI:
https://doi.org/10.1609/aaai.v38i19.30142
Keywords:
General
Abstract
Current datasets for auditing unwanted social bias are limited to protected demographic features such as race and gender. In this work, we introduce a comprehensive benchmark designed to capture the amplification of social bias, via stigmas, in generative language models. Taking inspiration from social science research, we start with a documented list of 93 US-centric stigmas and curate a question-answering (QA) dataset involving simple social situations. Our benchmark, SocialStigmaQA, contains roughly 10K prompts, spanning a variety of prompt styles, carefully constructed to systematically test for both social bias and model robustness. We present results for SocialStigmaQA with two open source generative language models and find that the proportion of socially biased output ranges from 45% to 59% across a variety of decoding strategies and prompting styles. We demonstrate that the deliberate design of the templates in our benchmark (e.g., adding biasing text to the prompt or using different verbs that change which answer indicates bias) affects the models' tendency to generate socially biased output. Additionally, through manual evaluation, we discover problematic patterns in the generated chain-of-thought output, ranging from subtle bias to a lack of reasoning. Warning: This paper contains examples of text which are toxic, biased, and potentially harmful.
Published
2024-03-24
How to Cite
Nagireddy, M., Chiazor, L., Singh, M., & Baldini, I. (2024). SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21454-21462. https://doi.org/10.1609/aaai.v38i19.30142
Section
AAAI Technical Track on Safe, Robust and Responsible AI Track