Steerable Pluralism: Pluralistic Alignment via Few-Shot Comparative Regression

Authors

  • Jadie Adams, Kitware Inc.
  • Brian Hu, Kitware Inc.
  • Emily Veenhuis, Kitware Inc.
  • David Joy, Kitware Inc.
  • Bharadwaj Ravichandran, Kitware Inc.
  • Aaron Bray, Kitware Inc.
  • Anthony Hoogs, Kitware Inc.
  • Arslan Basharat, Kitware Inc.

DOI

https://doi.org/10.1609/aies.v8i1.36527

Abstract

Large language models (LLMs) are currently aligned using techniques such as reinforcement learning from human feedback (RLHF). However, these methods use scalar rewards that can only reflect user preferences on average. Pluralistic alignment instead seeks to capture diverse user preferences across a set of attributes, moving beyond just helpfulness and harmlessness. Toward this end, we propose a steerable pluralistic model based on few-shot comparative regression that can adapt to individual user preferences. Our approach leverages in-context learning and reasoning, grounded in a set of fine-grained attributes, to compare response options and make aligned choices. To evaluate our algorithm, we also propose two new steerable pluralistic benchmarks by adapting the Moral Integrity Corpus (MIC) and the HelpSteer2 datasets, demonstrating the applicability of our approach to value-aligned decision-making and reward modeling, respectively. Our few-shot comparative regression approach is interpretable and compatible with different attributes and LLMs, while outperforming multiple baseline and state-of-the-art methods. Our work provides new insights and research directions in pluralistic alignment, enabling a fairer and more representative use of LLMs and advancing the state of the art in ethical AI.
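The few-shot comparative regression idea described above can be illustrated with a minimal sketch: build an in-context prompt from attribute-annotated comparison examples, have an LLM regress per-attribute scores for a new response pair, and aggregate those scores with user-specific attribute weights. All names here (`ComparisonExample`, `build_prompt`, `choose`, the attribute names and score scale) are hypothetical illustrations, not the paper's actual API or prompts.

```python
# Hypothetical sketch of few-shot comparative regression via in-context learning.
# Identifiers and the -2..+2 score scale are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ComparisonExample:
    """One annotated comparison: two responses scored per fine-grained attribute."""
    question: str
    response_a: str
    response_b: str
    # Per-attribute comparative scores, e.g. +1 if A is somewhat preferred.
    scores: Dict[str, int]


def build_prompt(attributes: List[str],
                 few_shot: List[ComparisonExample],
                 question: str,
                 response_a: str,
                 response_b: str) -> str:
    """Assemble an in-context prompt asking an LLM to regress per-attribute
    comparative scores for a new response pair."""
    lines = [
        "Compare two candidate responses along these attributes: "
        + ", ".join(attributes) + ".",
        "Score each attribute from -2 (B much better) to +2 (A much better).",
        "",
    ]
    for ex in few_shot:  # few-shot demonstrations of the desired regression
        lines += [
            f"Question: {ex.question}",
            f"Response A: {ex.response_a}",
            f"Response B: {ex.response_b}",
            "Scores: " + ", ".join(f"{a}={s}" for a, s in ex.scores.items()),
            "",
        ]
    lines += [  # the query pair the LLM should score
        f"Question: {question}",
        f"Response A: {response_a}",
        f"Response B: {response_b}",
        "Scores:",
    ]
    return "\n".join(lines)


def choose(attribute_scores: Dict[str, float],
           user_weights: Dict[str, float]) -> str:
    """Steerable aggregation: weight the regressed per-attribute scores by a
    user's preferences; a positive total selects response A."""
    total = sum(user_weights.get(a, 0.0) * s for a, s in attribute_scores.items())
    return "A" if total >= 0 else "B"
```

In use, the prompt would be sent to an LLM, its per-attribute output parsed back into a score dictionary, and `choose` applied with each user's own attribute weights, which is what makes the pluralistic model steerable to individuals rather than to an average preference.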


Published

2025-10-15

How to Cite

Adams, J., Hu, B., Veenhuis, E., Joy, D., Ravichandran, B., Bray, A., … Basharat, A. (2025). Steerable Pluralism: Pluralistic Alignment via Few-Shot Comparative Regression. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 15–25. https://doi.org/10.1609/aies.v8i1.36527