Fairness-Aware Structured Pruning in Transformers

Authors

  • Abdelrahman Zayed, Mila, Polytechnique Montreal
  • Gonçalo Mordido, Mila, Polytechnique Montreal
  • Samira Shabanian, Independent Researcher
  • Ioana Baldini, IBM Research
  • Sarath Chandar, Mila, Polytechnique Montreal, Canada CIFAR AI Chair

DOI:

https://doi.org/10.1609/aaai.v38i20.30256

Keywords:

General

Abstract

The increasing size of large language models (LLMs) has introduced challenges in their training and inference. Removing model components is perceived as a solution to tackle large model sizes; however, existing pruning methods focus solely on performance, without considering an essential aspect of the responsible use of LLMs: model fairness. It is crucial to address the fairness of LLMs toward diverse groups, such as women, Black people, LGBTQ+ people, and Jewish communities, among others, as these models are deployed and made available to a wide audience. In this work, we first investigate how attention heads impact fairness and performance in pre-trained transformer-based language models. We then propose a novel method to prune the attention heads that negatively impact fairness while retaining the heads critical for performance, i.e., language modeling ability. Our approach is practical in terms of time and resources, as it does not require fine-tuning the final pruned (and fairer) model. Our findings demonstrate a reduction in gender bias of 19%, 19.5%, 39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different sizes, GPT-J, and Llama 2, respectively, in comparison to the biased model, with only a slight decrease in performance. WARNING: This work uses language that is offensive in nature.
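To make the idea of fairness-aware head pruning concrete, the following is a minimal, illustrative sketch using Hugging Face Transformers. It is not the authors' exact procedure: the bias proxy (the he/she next-token probability gap after a stereotyped prompt), the single-sentence performance proxy, and the selection thresholds are stand-ins chosen only to keep the example self-contained and runnable.

```python
# Illustrative sketch: score each attention head by ablating it, then prune
# heads whose removal reduces a (crude) gender-bias proxy while barely hurting
# language-modeling loss. Thresholds and metrics are placeholders, not the
# paper's actual criteria.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("distilgpt2")
tok = GPT2TokenizerFast.from_pretrained("distilgpt2")
model.eval()

n_layers, n_heads = model.config.n_layer, model.config.n_head
he_id = tok(" he")["input_ids"][0]
she_id = tok(" she")["input_ids"][0]

@torch.no_grad()
def bias_and_loss(head_mask=None):
    """Return (bias proxy, LM loss) for the model with some heads masked out."""
    # Crude bias proxy: |log p(" he") - log p(" she")| after a stereotyped prompt.
    prompt = tok("The nurse said that", return_tensors="pt")
    logits = model(**prompt, head_mask=head_mask).logits[0, -1]
    logp = torch.log_softmax(logits, dim=-1)
    bias = (logp[he_id] - logp[she_id]).abs().item()
    # Performance proxy: language-modeling loss on a neutral sentence.
    text = tok("The weather in Montreal is cold in winter.", return_tensors="pt")
    loss = model(**text, labels=text["input_ids"], head_mask=head_mask).loss.item()
    return bias, loss

base_bias, base_loss = bias_and_loss()

# Ablate each head in turn and record how the bias and loss proxies change.
effects = {}
for layer in range(n_layers):
    for head in range(n_heads):
        mask = torch.ones(n_layers, n_heads)
        mask[layer, head] = 0.0  # mask out this single head
        bias, loss = bias_and_loss(head_mask=mask)
        effects[(layer, head)] = (bias - base_bias, loss - base_loss)

# Keep heads critical for performance; prune those that mainly contribute bias.
heads_to_prune = {}
for (layer, head), (d_bias, d_loss) in effects.items():
    if d_bias < 0 and d_loss < 0.05:  # illustrative thresholds only
        heads_to_prune.setdefault(layer, []).append(head)

model.prune_heads(heads_to_prune)  # structured pruning; no fine-tuning afterwards
```

In a faithful implementation, the bias and performance measurements would come from the bias probes and evaluation corpora used in the paper rather than single sentences, but the overall pattern, i.e., score heads by their impact on fairness and performance, then prune without fine-tuning, is the same.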

Published

2024-03-24

How to Cite

Zayed, A., Mordido, G., Shabanian, S., Baldini, I., & Chandar, S. (2024). Fairness-Aware Structured Pruning in Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22484-22492. https://doi.org/10.1609/aaai.v38i20.30256