Multi-Mask Label Mapping for Prompt-Based Learning


  • Jirui Qi Beihang University
  • Richong Zhang Beihang University
  • Jaein Kim Beihang University
  • Junfan Chen Beihang University
  • Wenyi Qin Beihang University
  • Yongyi Mao University of Ottawa



SNLP: Text Classification, SNLP: Applications, SNLP: Language Models


Prompt-based learning has shown significant success in few-shot classification. The mainstream approach concatenates a template to the input text, transforming the classification task into a cloze-type task in which label mapping plays an important role in recovering the ground-truth labels. However, current label mapping methods rely only on the context of a single input, which becomes problematic when the text contains misleading information. Specifically, recent work has shown that even large language models such as BERT and RoBERTa base their classification decisions heavily on a specific keyword, regardless of the task or the context. Such a word is referred to as a lexical cue, and if a misleading lexical cue is present in an instance, it can lead the model to a wrong prediction. We propose a multi-mask prompt-based approach with Multi-Mask Label Mapping (MMLM) that reduces the impact of misleading lexical cues by allowing the model to exploit multiple lexical cues. To satisfy the conditions of few-shot learning, we further propose an instance augmentation approach for the cloze-type model, through which misleading cues are gradually excluded during training. We demonstrate the effectiveness of MMLM through both theoretical analysis and empirical studies, and show that MMLM outperforms existing label mapping approaches.
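To make the idea concrete, the following is a minimal conceptual sketch (not the authors' implementation) of multi-mask cloze prompting: a template with several mask slots is appended to the input, and the per-mask label-word distributions are combined so that no single misleading lexical cue can dominate the decision. The template wording, the number of masks, and the simple averaging aggregation are all illustrative assumptions; the paper's actual MMLM procedure and training details differ.

```python
# Conceptual sketch of multi-mask label mapping (not the paper's code).
# The template, mask count, and averaging rule are illustrative assumptions.

def build_multi_mask_prompt(text, num_masks=3):
    """Append a cloze template containing several [MASK] slots to the input."""
    masks = " ".join("[MASK]" for _ in range(num_masks))
    return f"{text} Overall it was {masks}."

def aggregate_mask_predictions(per_mask_probs):
    """Combine label-word probabilities across mask positions.

    per_mask_probs: one dict per mask slot, mapping each label word to its
    probability (e.g. taken from a masked language model's output). Here we
    simply average across slots and pick the highest-scoring label word.
    """
    labels = per_mask_probs[0].keys()
    avg = {lab: sum(p[lab] for p in per_mask_probs) / len(per_mask_probs)
           for lab in labels}
    return max(avg, key=avg.get)

prompt = build_multi_mask_prompt("The plot was thin but the acting shone.")

# Toy per-mask distributions: one slot is swayed toward the wrong label by
# the misleading cue "thin", but the other two slots recover the sentiment.
probs = [
    {"great": 0.7, "terrible": 0.3},
    {"great": 0.3, "terrible": 0.7},
    {"great": 0.8, "terrible": 0.2},
]
print(aggregate_mask_predictions(probs))  # -> great
```

With a single mask, the misled slot alone would flip the prediction; aggregating over multiple slots makes the decision robust to one bad cue, which is the intuition the abstract describes.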




How to Cite

Qi, J., Zhang, R., Kim, J., Chen, J., Qin, W., & Mao, Y. (2023). Multi-Mask Label Mapping for Prompt-Based Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13465-13473.



AAAI Technical Track on Speech & Natural Language Processing