Measuring Human-AI Value Alignment in Large Language Models

Authors

  • Hakim Norhashim National University of Singapore AI Singapore
  • Jungpil Hahn National University of Singapore AI Singapore

DOI:

https://doi.org/10.1609/aies.v7i1.31703

Abstract

This paper seeks to quantify the human-AI value alignment in large language models. Alignment between humans and AI has become a critical area of research to mitigate potential harm posed by AI. In tandem with this need, developers have incorporated a values-based approach towards model development where ethical principles are integrated from its inception. However, ensuring that these values are reflected in outputs remains a challenge. In addition, studies have noted that models lack consistency when producing outputs, which in turn can affect their function. Such variability in responses would impact human-AI value alignment as well, particularly where consistent alignment is critical. Fundamentally, the task of uncovering a model’s alignment is one of explainability – where understanding how these complex models behave is essential in order to assess their alignment. This paper examines the problem through a case study of GPT-3.5. By repeatedly prompting the model with scenarios based on a dataset of moral stories, we aggregate the model’s alignment with human values to produce a human-AI value alignment metric. Moreover, by using a comprehensive taxonomy of human values, we uncover the latent value profile represented by these outputs, thereby determining the extent of human-AI value alignment.

Downloads

Published

2024-10-16

How to Cite

Norhashim, H., & Hahn, J. (2024). Measuring Human-AI Value Alignment in Large Language Models. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1063-1073. https://doi.org/10.1609/aies.v7i1.31703