Every Opinion Matters: Evaluating and Building Models with Pluralistic Views

Authors

  • Xiang Li University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v39i27.35111

Abstract

The development of large language models (LLMs) has demonstrated robust performance on English-centric benchmarks, which predominantly reflect majority opinions and dominant cultural norms. However, successful deployment in real-world applications requires handling context-specific and diverse knowledge, which is often underrepresented in training data. Addressing a plurality of perspectives is therefore essential. My research focuses on developing pluralistic evaluation methods to assess the diversity of LLM outputs, with a particular focus on culturally rich commonsense reasoning. I also work on integrating diverse knowledge into LLMs, aiming to bridge the gap between human and AI understanding by incorporating varied perspectives through novel probabilistic frameworks. In this talk, I will emphasize two key directions of my previous work: the probabilistic box model for representing diverse knowledge, and probabilistic evaluation for assessing diversity in LLMs, with an emphasis on distributional aspects. Finally, I will discuss my efforts to understand model behavior in long-tail scenarios.
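To make the "probabilistic box model" concrete, here is a minimal sketch under the assumption (common in the box-embedding literature) that each concept is an axis-aligned hyperrectangle whose volume behaves like a probability mass, with joint probability given by the volume of the intersection box. The `Box` class and toy coordinates are illustrative only, not the paper's implementation.

```python
# Minimal sketch of a probabilistic box model (an assumption based on
# the box-embedding literature, not necessarily this work's exact method).
import numpy as np

class Box:
    """An axis-aligned hyperrectangle with probabilistic volume semantics."""
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def volume(self):
        # Side lengths are clipped at zero so an empty box has volume 0.
        return float(np.prod(np.clip(self.upper - self.lower, 0.0, None)))

    def intersect(self, other):
        # The intersection of two boxes is itself a box.
        return Box(np.maximum(self.lower, other.lower),
                   np.minimum(self.upper, other.upper))

# Toy example inside the unit square, so volume = probability mass.
a = Box([0.0, 0.0], [0.5, 0.5])      # P(A) = 0.25
b = Box([0.25, 0.25], [1.0, 1.0])    # P(B) = 0.5625
p_joint = a.intersect(b).volume()    # P(A, B) = 0.0625
p_b_given_a = p_joint / a.volume()   # P(B | A) = 0.25
```

Because intersection is itself a box, conditional probabilities like `P(B | A)` come out of simple volume ratios, which is what makes box representations attractive for encoding overlapping, plural viewpoints.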

Published

2025-04-11

How to Cite

Li, X. (2025). Every Opinion Matters: Evaluating and Building Models with Pluralistic Views. Proceedings of the AAAI Conference on Artificial Intelligence, 39(27), 28716–28716. https://doi.org/10.1609/aaai.v39i27.35111