How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies
DOI:
https://doi.org/10.1609/aies.v7i1.31684Abstract
With the widespread availability of LLMs since the release of ChatGPT and increased public scrutiny, commercial model development appears to have focused their efforts on `safety' training concerning legal liabilities at the expense of social impact evaluation. This mimics a similar trend which we could observe for search engine autocompletion some years prior. We draw on scholarship from NLP and search engine auditing and present a novel evaluation task in the style of autocompletion prompts to assess stereotyping in LLMs. We assess LLMs by using four metrics, namely refusal rates, toxicity, sentiment and regard, with and without safety system prompts. Our findings indicate an improvement to stereotyping outputs with the system prompt, but overall a lack of attention by LLMs under study to certain harms classified as toxic, particularly for prompts about peoples/ethnicities and sexual orientation. Mentions of intersectional identities trigger a disproportionate amount of stereotyping. Finally, we discuss the implications of these findings about stereotyping harms in light of the coming intermingling of LLMs and search and the choice of stereotyping mitigation policy to adopt. We address model builders, academics, NLP practitioners and policy makers, calling for accountability and awareness concerning stereotyping harms, be it for training data curation, leader board design and usage, or social impact measurement.Downloads
Published
2024-10-16
How to Cite
Leidinger, A., & Rogers, R. (2024). How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 839-854. https://doi.org/10.1609/aies.v7i1.31684
Issue
Section
Full Archival Papers