Overcoming Language Disparity in Online Content Classification with Multimodal Learning

Authors

  • Gaurav Verma, Georgia Institute of Technology
  • Rohit Mujumdar, Georgia Institute of Technology
  • Zijie J. Wang, Georgia Institute of Technology
  • Munmun De Choudhury, Georgia Institute of Technology
  • Srijan Kumar, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/icwsm.v16i1.19356

Keywords:

Text categorization; topic recognition; demographic/gender/age identification; subjectivity in textual data; sentiment analysis; polarity/opinion identification and extraction; linguistic analyses of social media behavior; social innovation and effecting change through social media; measuring predictability of real-world phenomena based on social media, e.g., spanning politics, finance, and health

Abstract

Advances in Natural Language Processing (NLP) have revolutionized the way researchers and practitioners address crucial societal problems. Large language models are now the standard for developing state-of-the-art solutions for text detection and classification tasks. However, the development of advanced computational techniques and resources is disproportionately focused on the English language, sidelining a majority of the languages spoken globally. While existing research has developed better multilingual and monolingual language models to bridge this language disparity between English and non-English languages, we explore the promise of incorporating the information contained in images via multimodal machine learning. Our comparative analyses on three detection tasks focusing on crisis information, fake news, and emotion recognition, across five high-resource non-English languages, demonstrate that: (a) detection frameworks based on pre-trained large language models like BERT and multilingual-BERT systematically perform better on the English language than on non-English languages, and (b) including images via multimodal learning bridges this performance gap. We situate our findings with respect to existing work on the pitfalls of large language models, and discuss their theoretical and practical implications.

Published

2022-05-31

How to Cite

Verma, G., Mujumdar, R., Wang, Z. J., De Choudhury, M., & Kumar, S. (2022). Overcoming Language Disparity in Online Content Classification with Multimodal Learning. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1040-1051. https://doi.org/10.1609/icwsm.v16i1.19356