Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-training
Keywords: Speech & Natural Language Processing (SNLP), Machine Learning (ML)
Abstract
The goal of stance detection is to determine the viewpoint expressed in a piece of text towards a target. These viewpoints, or contexts, are often expressed in many different languages, depending on the user and the platform, which can be a local news outlet, a social media platform, a news forum, etc. Most research on stance detection, however, has been limited to working with a single language and on a few limited targets, with little work on cross-lingual stance detection. Moreover, non-English sources of labelled data are often scarce and present additional challenges. Recently, large multilingual language models have substantially improved performance on many non-English tasks, especially those with a limited number of examples. This highlights the importance of model pre-training and its ability to learn from few examples. In this paper, we present the most comprehensive study of cross-lingual stance detection to date: we experiment with 15 diverse datasets in 12 languages from 6 language families, and with 6 low-resource evaluation settings each. For our experiments, we build on pattern-exploiting training (PET), proposing the addition of a novel label encoder to simplify the verbalisation procedure. We further propose sentiment-based generation of stance data for pre-training, which yields a sizeable improvement of more than 6% F1 absolute in few-shot learning settings compared to several strong baselines.
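To make the PET idea mentioned in the abstract concrete, the sketch below shows how a stance example could be cast as a cloze-style pattern whose masked position a language model fills with a verbalized stance word. This is a minimal illustration, not the authors' implementation: the pattern wording, the verbalizer word choices, and all function names here are hypothetical.

```python
# Hypothetical sketch of PET-style prompting for stance detection.
# The pattern text and verbalizer entries below are illustrative choices,
# not those used in the paper.

STANCE_VERBALIZER = {
    "favor": "supports",
    "against": "opposes",
    "neutral": "discusses",
}

def build_cloze(text: str, target: str, mask_token: str = "[MASK]") -> str:
    """Turn a (text, target) pair into a cloze pattern; the language model
    is trained to fill the masked slot with a verbalized stance word."""
    return f'"{text}" The author {mask_token} {target}.'

def verbalize(label: str) -> str:
    """Map a stance label to the single token expected at the mask.
    The label encoder proposed in the paper would instead learn a label
    representation, avoiding this hand-picked word mapping."""
    return STANCE_VERBALIZER[label]

example = build_cloze("Vaccines have saved millions of lives.", "vaccination")
# example == '"Vaccines have saved millions of lives." The author [MASK] vaccination.'
```

Hand-picking verbalizer words is language-specific, which is one motivation the abstract gives for replacing the verbalisation step with a learned label encoder in the cross-lingual setting.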
How to Cite
Hardalov, M., Arora, A., Nakov, P., & Augenstein, I. (2022). Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10729-10737. https://doi.org/10.1609/aaai.v36i10.21318
AAAI Technical Track on Speech and Natural Language Processing