LLM Voting: Human Choices and AI Collective Decision-Making

Authors

  • Joshua C. Yang, ETH Zurich
  • Damian Dailisan, ETH Zurich
  • Marcin Korecki, ETH Zurich
  • Carina I. Hausladen, ETH Zurich
  • Dirk Helbing, ETH Zurich

DOI:

https://doi.org/10.1609/aies.v7i1.31758

Abstract

This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2, their biases, and how they align with human voting patterns. We used a dataset from a human voting experiment to establish a baseline for human preferences and conducted a corresponding experiment with LLM agents. We observed that both the choice of voting method and the presentation order of options influenced LLM voting outcomes. Varying the assigned persona can reduce some of these biases and improve alignment with human choices. While Chain-of-Thought prompting did not improve prediction accuracy, it shows potential for making the AI voting process more explainable. We also identified a trade-off between preference diversity and alignment accuracy in LLMs, mediated by the temperature setting. Our findings indicate that LLMs may produce less diverse collective outcomes and rely on biased assumptions when used in voting scenarios, underscoring the need for cautious integration of LLMs into democratic processes.
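The abstract mentions persona prompts, presentation order, and temperature as experimental factors. As an illustration only, the sketch below shows how one such LLM voting query might be issued with the OpenAI Python SDK; the persona text, project list, and prompt wording are hypothetical placeholders, not the authors' released code or materials.

```python
# Minimal sketch of a single LLM "voting" query, assuming the OpenAI Python SDK.
# Persona, ballot options, and prompt wording are hypothetical placeholders.
import random

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = "You are a 34-year-old urban planner voting on city projects."
projects = ["Bike lanes", "Playground", "Tree planting", "Public Wi-Fi"]

# Shuffle the options to probe the presentation-order effects the paper reports.
random.shuffle(projects)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=1.0,  # varied across runs to study the diversity/alignment trade-off
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": (
                "Approval voting: list every project you approve of, "
                "comma-separated.\nProjects: " + ", ".join(projects)
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Repeating such calls across temperatures, personas, voting methods, and shuffled option orders would yield the kind of factor sweep the abstract describes, with the responses then compared against the human baseline.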

Published

2024-10-16

How to Cite

Yang, J. C., Dailisan, D., Korecki, M., Hausladen, C. I., & Helbing, D. (2024). LLM Voting: Human Choices and AI Collective Decision-Making. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1696-1708. https://doi.org/10.1609/aies.v7i1.31758