Gender and Racial Stereotype Detection in Legal Opinion Word Embeddings

Authors

  • Sean Matthews, Thomson Reuters
  • John Hudzina, Thomson Reuters
  • Dawn Sepehr, Thomson Reuters

DOI:

https://doi.org/10.1609/aaai.v36i11.21461

Keywords:

AI For Social Impact (AISI Track Papers Only), Humans And AI (HAI), Philosophy And Ethics Of AI (PEAI)

Abstract

Studies have shown that some Natural Language Processing (NLP) systems encode and replicate harmful biases with potential adverse ethical effects in our society. In this article, we propose an approach for identifying gender and racial stereotypes in word embeddings trained on judicial opinions from U.S. case law. Embeddings containing stereotype information may cause harm when used in downstream systems for classification, information extraction, question answering, or other machine learning applications used to build legal research tools. We first explain how previously proposed methods for identifying these biases are not well suited for use with word embeddings trained on legal opinion text. We then propose a domain-adapted method for identifying gender and racial biases in the legal domain. Our analyses using these methods suggest that racial and gender biases are encoded into word embeddings trained on legal opinions. These biases are not mitigated by exclusion of historical data, and appear across multiple large topical areas of the law. Implications for downstream systems that use legal opinion word embeddings and suggestions for potential mitigation strategies based on our observations are also discussed.
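For context, the "previously proposed methods" the abstract refers to are association tests over word embeddings, such as the Word Embedding Association Test (WEAT). The sketch below is a minimal, generic WEAT-style effect-size computation, not the authors' domain-adapted method; the word lists, vector dimensionality, and random stand-in vectors are illustrative assumptions only, and a real analysis would look up vectors from embeddings trained on legal opinion text.

```python
# Hypothetical illustration (not the authors' method): a generic WEAT-style
# association test of the kind the abstract cites as previously proposed.
# Random vectors stand in for embedding lookups from a legal-opinion model.
import numpy as np


def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def association(w, attr_a, attr_b):
    """Mean similarity of word vector w to attribute set A minus set B."""
    return (np.mean([cosine(w, a) for a in attr_a])
            - np.mean([cosine(w, b) for b in attr_b]))


def weat_effect_size(target_x, target_y, attr_a, attr_b):
    """Cohen's-d-style effect size comparing two target sets' associations."""
    assoc_x = [association(w, attr_a, attr_b) for w in target_x]
    assoc_y = [association(w, attr_a, attr_b) for w in target_y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std


# Toy example: random vectors in place of, e.g., career/family target terms
# and male/female-associated attribute terms drawn from the embedding space.
rng = np.random.default_rng(0)
dim = 50
target_x = [rng.normal(size=dim) for _ in range(8)]  # e.g. career terms
target_y = [rng.normal(size=dim) for _ in range(8)]  # e.g. family terms
attr_a = [rng.normal(size=dim) for _ in range(8)]    # e.g. male-associated terms
attr_b = [rng.normal(size=dim) for _ in range(8)]    # e.g. female-associated terms
print(f"WEAT effect size: {weat_effect_size(target_x, target_y, attr_a, attr_b):.3f}")
```

A large positive effect size would indicate that the first target set is more strongly associated with attribute set A than the second target set is; the paper argues that tests of this general form need domain adaptation before they are meaningful on legal opinion embeddings.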

Published

2022-06-28

How to Cite

Matthews, S., Hudzina, J., & Sepehr, D. (2022). Gender and Racial Stereotype Detection in Legal Opinion Word Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12026-12033. https://doi.org/10.1609/aaai.v36i11.21461