Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning
DOI:
https://doi.org/10.1609/aaai.v38i6.28466
Keywords:
CV: Language and Vision, ML: Life-Long and Continual Learning, NLP: Machine Translation, Multilinguality, Cross-Lingual NLP
Abstract
While vision-language pre-trained models (VL-PTMs) have advanced multimodal research in recent years, their mastery of only a few languages, such as English, restricts their applicability in broader communities. Consequently, there is increasing interest in developing multilingual VL models via a joint-learning setup, which, however, can be impractical due to high costs and limited data availability. In this work, we propose to extend VL-PTMs' language capacity by continual language learning (CLL), in which a model incrementally updates its linguistic knowledge without suffering from catastrophic forgetting (CF). We begin our study by introducing a model dubbed CLL-CLIP, which builds upon CLIP, a prevailing VL-PTM that has already acquired image-English text alignment. Specifically, CLL-CLIP contains an expandable token embedding layer to handle linguistic differences. It solely trains token embeddings to improve memory stability and is optimized under cross-modal and cross-lingual objectives to learn the alignment between images and multilingual texts. To alleviate CF arising from covariate shift and lexical overlap, we further propose a novel approach that ensures all token embeddings are identically distributed at initialization and regularizes token embedding learning during training. We construct a CLL benchmark covering 36 languages based on the MSCOCO and XM3600 datasets and then evaluate multilingual image-text retrieval performance. Extensive experiments verify the effectiveness of CLL-CLIP and show that our approach can boost CLL-CLIP, e.g., by 6.7% in text-to-image average Recall@1 on XM3600, and improve various state-of-the-art methods consistently. Our code and data are available at https://github.com/yangbang18/CLFM.
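To make the two mechanisms named in the abstract concrete, below is a minimal PyTorch sketch of an expandable token embedding layer with distribution-matched initialization. This is not the authors' released code (see the CLFM repository for that); the class name, the `expand` method, and the reading of "identical distribution" as sampling new rows from a normal distribution fit to the existing embedding matrix are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpandableTokenEmbedding(nn.Module):
    """Token embedding that grows as each new language adds vocabulary.

    Sketch of two ideas from the abstract: (1) an expandable embedding
    layer to handle linguistic differences, and (2) initializing new rows
    so that all token embeddings share an identical distribution.
    """

    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(vocab_size, dim) * 0.02)

    @torch.no_grad()
    def expand(self, num_new_tokens: int) -> None:
        # Match the mean/std of the existing rows so old and new tokens
        # start out identically distributed (one plausible reading of the
        # paper's initialization scheme; the actual recipe may differ).
        mean, std = self.weight.mean(), self.weight.std()
        new_rows = torch.randn(num_new_tokens, self.weight.size(1)) * std + mean
        self.weight = nn.Parameter(torch.cat([self.weight, new_rows], dim=0))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return F.embedding(token_ids, self.weight)

# Usage: extend CLIP's 49,408-token BPE vocabulary for a new language and
# optimize only the embeddings, mirroring the abstract's "solely trains
# token embeddings" design. The new-token count here is hypothetical.
emb = ExpandableTokenEmbedding(vocab_size=49408, dim=512)
emb.expand(num_new_tokens=8000)
optimizer = torch.optim.Adam([emb.weight], lr=1e-4)
```

Freezing everything except the embedding matrix is what the abstract credits for memory stability: the shared encoder weights that encode image-text alignment are never touched, so earlier languages cannot be overwritten through them.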
Published
2024-03-24
How to Cite
Yang, B., Dai, Y., Cheng, X., Li, Y., Raza, A., & Zou, Y. (2024). Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6458-6466. https://doi.org/10.1609/aaai.v38i6.28466
Issue
Vol. 38 No. 6 (2024)
Section
AAAI Technical Track on Computer Vision V