Debiasing Multimodal Sarcasm Detection with Contrastive Learning

Authors

  • Mengzhao Jia, Shandong University
  • Can Xie, Shandong University
  • Liqiang Jing, University of Texas at Dallas

DOI:

https://doi.org/10.1609/aaai.v38i16.29795

Keywords:

NLP: Safety and Robustness, NLP: Ethics -- Bias, Fairness, Transparency & Privacy, NLP: Language Grounding & Multi-modal NLP

Abstract

Despite the commendable achievements of existing work, prevailing multimodal sarcasm detection studies rely more heavily on textual content than on visual information. This unavoidably induces spurious correlations between textual words and labels, thereby significantly hindering the models' generalization capability. To address this problem, we define the task of out-of-distribution (OOD) multimodal sarcasm detection, which evaluates models' generalizability when the word distribution differs between training and testing. Moreover, we propose a novel debiasing multimodal sarcasm detection framework with contrastive learning, which aims to mitigate the harmful effect of biased textual factors for robust OOD generalization. In particular, we first design counterfactual data augmentation to construct positive samples with dissimilar word biases and negative samples with similar word biases. Subsequently, we devise an adapted debiasing contrastive learning mechanism that empowers the model to learn robust task-relevant features and alleviates the adverse effect of biased words. Extensive experiments show the superiority of the proposed framework.
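To make the contrastive idea concrete, below is a minimal sketch (not the authors' released code) of a debiasing contrastive objective in the spirit the abstract describes: each anchor is pulled toward a counterfactually augmented positive whose biased words differ, and pushed away from negatives that share similar word biases. The InfoNCE-style formulation, tensor shapes, and temperature value are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def debiasing_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: pull the anchor toward its counterfactual positive
    (dissimilar word bias) and away from bias-sharing negatives.

    anchor:    (B, D)    multimodal representations of the original samples
    positive:  (B, D)    representations of counterfactual positive views
    negatives: (B, K, D) representations of K bias-similar negatives per anchor
    """
    # L2-normalize so dot products are cosine similarities
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Similarities scaled by temperature
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (B, K)

    # Cross-entropy with the positive placed at index 0
    logits = torch.cat([pos_logit, neg_logits], dim=1)                        # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy usage with random features
    B, K, D = 4, 8, 256
    loss = debiasing_contrastive_loss(torch.randn(B, D),
                                      torch.randn(B, D),
                                      torch.randn(B, K, D))
    print(loss.item())
```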

Published

2024-03-24

How to Cite

Jia, M., Xie, C., & Jing, L. (2024). Debiasing Multimodal Sarcasm Detection with Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18354-18362. https://doi.org/10.1609/aaai.v38i16.29795

Issue

Vol. 38 No. 16 (2024)
Section

AAAI Technical Track on Natural Language Processing I