Script, Language, and Labels: Overcoming Three Discrepancies for Low-Resource Language Specialization

Authors

  • Jaeseong Lee, Seoul National University
  • Dohyeon Lee, Seoul National University
  • Seung-won Hwang, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v37i11.26528

Keywords:

SNLP: Machine Translation & Multilinguality, SNLP: Language Models, SNLP: Learning & Optimization for SNLP, SNLP: Syntax -- Tagging, Chunking & Parsing

Abstract

Although multilingual pretrained models (mPLMs) have enabled support for natural language processing in diverse languages, their coverage is limited to 100+ languages, leaving 6,500+ languages ‘unseen’. One common approach for an unseen language is specializing the model for it as the target, by performing additional masked language modeling (MLM) with the target language corpus. However, we argue that, due to discrepancies from multilingual MLM pretraining, such naive specialization can be suboptimal. Specifically, we pose three discrepancies to overcome. The script and linguistic discrepancies of the target language from related seen languages hinder positive transfer; to address these, we propose maximizing representation similarity, unlike existing approaches that maximize overlaps. In addition, the label space for MLM prediction can vary across languages, for which we propose reinitializing the top layers for more effective adaptation. Experiments over four different language families and three tasks show that our method improves the task performance of unseen languages with statistical significance, while the previous approach fails to do so.
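To make the specialization recipe in the abstract concrete, below is a minimal sketch (not the authors' released code) of the two steps it describes: re-initializing the top encoder layers of a multilingual PLM, then continuing MLM training on a corpus of the unseen target language. The base model name, the number of re-initialized layers, the corpus path, and the training hyperparameters are illustrative assumptions.

```python
# Hedged sketch of "reinitialize top layers + continue MLM on the target language".
# Assumptions: xlm-roberta-base as the mPLM, k=2 reinitialized layers,
# and a plain-text target corpus at "target_language_corpus.txt".
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "xlm-roberta-base"  # assumed base mPLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Step 1: re-initialize the top-k transformer layers so the MLM prediction space
# can adapt to the target language (k is a hyperparameter; 2 is an assumption).
k = 2
for layer in model.roberta.encoder.layer[-k:]:
    layer.apply(model._init_weights)

# Step 2: continue masked language modeling on the target-language corpus.
dataset = load_dataset("text", data_files={"train": "target_language_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="specialized_mplm",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Note that this sketch covers only the layer-reinitialization and continued-MLM steps; the representation-similarity objective for bridging script and linguistic discrepancies is part of the paper's full method and is not shown here.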

Published

2023-06-26

How to Cite

Lee, J., Lee, D., & Hwang, S.-won. (2023). Script, Language, and Labels: Overcoming Three Discrepancies for Low-Resource Language Specialization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13004-13013. https://doi.org/10.1609/aaai.v37i11.26528

Section

AAAI Technical Track on Speech & Natural Language Processing