Legal Minds, Algorithmic Decisions: How LLMs Apply Constitutional Principles in Complex Scenarios

Authors

  • Camilla Bignotti, Bank of Italy
  • Carolina Camassa, Bank of Italy

DOI:

https://doi.org/10.1609/aies.v7i1.31623

Abstract

In this paper, we conduct an empirical analysis of how large language models (LLMs), specifically GPT-4, interpret constitutional principles in complex decision-making scenarios. We examine rulings from the Italian Constitutional Court on bioethics issues that involve trade-offs between competing values, and we compare GPT's legal arguments on these issues to those presented by the State, the Court, and the applicants. Our results indicate that GPT consistently aligns more closely with progressive interpretations of the Constitution, often overlooking competing values and mirroring the applicants' views rather than the State's more conservative perspectives or the Court's moderate positions. These findings raise important questions about the value alignment of LLMs in scenarios where societal values conflict. We thus underscore the importance of testing alignment in real-world scenarios and of considering the implications of deploying LLMs in decision-making processes.

Published

2024-10-16

How to Cite

Bignotti, C., & Camassa, C. (2024). Legal Minds, Algorithmic Decisions: How LLMs Apply Constitutional Principles in Complex Scenarios. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 120-130. https://doi.org/10.1609/aies.v7i1.31623