Investigating and Mitigating Undesirable Biases in Large Language Models
DOI:
https://doi.org/10.1609/aaai.v39i28.35214
Abstract
The rise of large language models (LLMs) has revolutionized natural language processing, offering powerful capabilities across a wide range of applications. The widespread integration of these models into everyday technology has brought to light deep concerns about the biases they encode, which can perpetuate harmful preconceptions and social injustices. The scope of my research includes social biases, brand biases, the impact of personas on bias, and stereotypes in low-resource languages. My contributions aim to deepen our understanding of these biases and to develop methodologies for mitigating them, enhancing the fairness and utility of LLMs across diverse global applications.
Published
2025-04-11
How to Cite
Kamruzzaman, M. (2025). Investigating and Mitigating Undesirable Biases in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(28), 29273-29274. https://doi.org/10.1609/aaai.v39i28.35214
Issue
Section
AAAI Doctoral Consortium Track