How Linguistically Fair Are Multilingual Pre-Trained Language Models?
DOI:
https://doi.org/10.1609/aaai.v35i14.17505
Keywords:
Ethics -- Bias, Fairness, Transparency & Privacy, Language Models, Bias, Fairness & Equity
Abstract
Massively multilingual pre-trained language models, such as mBERT and XLM-RoBERTa, have received significant attention in the recent NLP literature for their excellent capability towards cross-lingual zero-shot transfer of NLP tasks. This is especially promising because a large number of languages have no or very little labeled data for supervised learning. Moreover, a substantially improved performance on low-resource languages without any significant degradation of accuracy for high-resource languages leads us to believe that these models will help attain a fairer distribution of language technologies despite the prevalent unfair and extremely skewed distribution of resources across the world's languages. Nevertheless, these models, and the experimental approaches adopted by researchers to arrive at them, have been criticised by some for lacking a nuanced and thorough comparison of benefits across languages and tasks. A related and important question that has received little attention is how to choose from a set of models when no single model significantly outperforms the others on all tasks and languages. As we discuss in this paper, this is often the case, and the choices are usually made without a clear articulation of reasons or underlying fairness assumptions. In this work, we scrutinize the choices made in previous work, and propose a few different strategies for fair and efficient model selection based on the principles of fairness in economics and social choice theory. In particular, we emphasize Rawlsian fairness, which provides an appropriate framework for making fair (with respect to languages, or tasks, or both) choices while selecting multilingual pre-trained language models for a practical or scientific set-up.
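The Rawlsian (maximin) selection principle the abstract emphasizes can be sketched in a few lines: instead of picking the model with the best average score, pick the one whose worst-off language fares best. The following is a minimal illustration; the model names match those discussed in the abstract, but the per-language scores are hypothetical numbers chosen only to show how the two criteria can disagree, not results from the paper.

```python
# Hypothetical per-language accuracy scores for two multilingual models
# (illustrative numbers only, not taken from the paper).
scores = {
    "mBERT":       {"en": 0.91, "hi": 0.62, "sw": 0.48},
    "XLM-RoBERTa": {"en": 0.93, "hi": 0.70, "sw": 0.45},
}

def rawlsian_choice(scores):
    """Maximin: pick the model whose worst-performing language scores highest."""
    return max(scores, key=lambda m: min(scores[m].values()))

def utilitarian_choice(scores):
    """Baseline for contrast: pick the model with the highest mean score."""
    return max(scores, key=lambda m: sum(scores[m].values()) / len(scores[m]))

print(rawlsian_choice(scores))     # mBERT: its minimum (sw, 0.48) beats XLM-RoBERTa's (0.45)
print(utilitarian_choice(scores))  # XLM-RoBERTa: higher average despite a worse minimum
```

With these toy numbers the two criteria pick different models, which is exactly the situation the paper argues is common in practice: the choice depends on an often-unstated fairness assumption.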
Published
2021-05-18
How to Cite
Choudhury, M., & Deshpande, A. (2021). How Linguistically Fair Are Multilingual Pre-Trained Language Models?. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12710-12718. https://doi.org/10.1609/aaai.v35i14.17505
Section
AAAI Technical Track on Speech and Natural Language Processing I