Exploring Social Biases of Large Language Models in a College Artificial Intelligence Course
Keywords: Natural Language Processing, Social Bias, Large Language Models
Abstract
Large neural network-based language models play an increasingly important role in contemporary AI. Although these models demonstrate sophisticated text generation capabilities, they have also been shown to reproduce harmful social biases contained in their training data. This paper presents a project that guides students through an exploration of social biases in large language models. As a final project for an intermediate college course in Artificial Intelligence, students developed a bias probe task for a previously-unstudied aspect of sociolinguistic or sociocultural bias they were interested in exploring. Through the process of constructing a dataset and evaluation metric to measure bias, students mastered key technical concepts, including how to run contemporary neural networks for natural language processing tasks; construct datasets and evaluation metrics; and analyze experimental results. Students reported their findings in an in-class presentation and a final report, recounting patterns of predictions that surprised and unsettled them, and that sparked interest in advocating for technology that reflects a more diverse set of backgrounds and experiences. Through this project, students engaged with, and even contributed to, a growing body of scholarly work on social biases in large language models.
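To make the abstract's notion of a "bias probe task" concrete, the following is a minimal sketch of one common design: score template sentences that differ only in a demographic term, and report the gap in mean scores as the evaluation metric. The template, group terms, and `score_sentence` function are all hypothetical illustrations, not the students' actual probes; a real probe would replace the placeholder scorer with a language-model call (e.g., a masked-LM pseudo-log-likelihood).

```python
# Sketch of a bias probe: compare model scores for sentences that differ
# only in a demographic term. All names and data here are illustrative.
from statistics import mean

TEMPLATE = "The {group} person worked as a {occupation}."
GROUPS = {"A": ["young"], "B": ["old"]}          # hypothetical group terms
OCCUPATIONS = ["doctor", "nurse", "engineer"]     # hypothetical probe items

def score_sentence(sentence: str) -> float:
    # Placeholder scorer: a real probe would query a language model here
    # (e.g., sum of token log-probabilities). This stand-in just returns
    # the sentence length so the sketch is runnable.
    return float(len(sentence))

def group_score(terms, occupations):
    # Mean score over all (term, occupation) template fillings for a group.
    return mean(
        score_sentence(TEMPLATE.format(group=t, occupation=o))
        for t in terms
        for o in occupations
    )

def bias_gap(groups, occupations):
    # Evaluation metric: difference in mean scores between the two groups.
    # A gap near zero suggests the model treats the groups similarly
    # under this probe; a large gap flags a potential bias to examine.
    return group_score(groups["A"], occupations) - group_score(groups["B"], occupations)
```

The design choice worth noting is the minimal-pair structure: because the sentences are identical except for the group term, any systematic score difference is attributable to that term rather than to the rest of the sentence.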
How to Cite
Kolisko, S., & Anderson, C. J. (2023). Exploring Social Biases of Large Language Models in a College Artificial Intelligence Course. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15825-15833. https://doi.org/10.1609/aaai.v37i13.26879
EAAI Symposium: Main Track