Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing

Authors

  • Han Liu, Dalian University of Technology
  • Siyang Zhao, Dalian University of Technology
  • Xiaotong Zhang, Dalian University of Technology
  • Feng Zhang, Peking University
  • Wei Wang, Shenzhen MSU-BIT University
  • Fenglong Ma, The Pennsylvania State University
  • Hongyang Chen, Zhejiang Lab
  • Hong Yu, Dalian University of Technology
  • Xianchao Zhang, Dalian University of Technology

DOI

https://doi.org/10.1609/aaai.v38i17.29827

Keywords

NLP: Text Classification, NLP: Applications

Abstract

Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all. While prevailing methods have shown promising performance by transferring knowledge from seen classes to unseen classes, they are still limited in two ways: (1) inherent dissimilarities among classes make the transfer of features learned from seen classes to unseen classes difficult and inefficient, and (2) scarce labeled novel samples usually cannot provide enough supervision signals for the model to adjust from the source distribution to the target distribution, especially in complicated scenarios. To alleviate these issues, we propose a simple and effective strategy for few-shot and zero-shot text classification. We aim to liberate the model from the confines of seen classes, enabling it to predict unseen categories without any training on seen classes. Specifically, to mine more knowledge relevant to the unseen categories, we utilize a large pre-trained language model to generate pseudo novel samples and select the most representative ones as category anchors. We then convert the multi-class classification task into a binary classification task and predict by the similarities of query-anchor pairs, fully exploiting the limited supervision signals. Extensive experiments on six widely used public datasets show that our method significantly outperforms strong baselines on both few-shot and zero-shot tasks, even without using any seen-class samples.
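As a rough illustration of the anchor-generation and classification-reframing steps described in the abstract, the sketch below keeps the pseudo samples nearest each class centroid as anchors and labels a query by its most similar anchor. This is a schematic reading of the abstract, not the authors' implementation: the function names (`embed`, `select_anchors`, `predict`) are hypothetical, and the toy hashed bag-of-words encoder merely stands in for whatever pre-trained text encoder and anchor-scoring criterion the paper actually uses.

```python
import zlib
import numpy as np

def embed(texts, dim=64):
    """Toy stand-in for a real sentence encoder (hashed bag-of-words).
    In the paper's setting this would be a pre-trained text encoder."""
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for token in text.lower().split():
            out[i, zlib.crc32(token.encode()) % dim] += 1.0
    # L2-normalize so dot products act as cosine similarities
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-8)

def select_anchors(pseudo_samples, k=3):
    """Keep the k generated pseudo samples nearest the class centroid
    as that class's category anchors."""
    vecs = embed(pseudo_samples)
    centroid = vecs.mean(axis=0)
    scores = vecs @ centroid                  # similarity to centroid
    return vecs[np.argsort(-scores)[:k]]

def predict(query, anchors_per_class):
    """Reframe multi-class prediction as binary query-anchor matching:
    score every (query, anchor) pair, return the best-matching class."""
    q = embed([query])[0]
    return max(anchors_per_class,
               key=lambda label: float((anchors_per_class[label] @ q).max()))

# Usage: pseudo samples would come from a large pre-trained LM prompted
# with each unseen class name; here they are hard-coded for illustration.
anchors = {
    "sports":  select_anchors(["the team won the match", "a great goal"]),
    "finance": select_anchors(["stocks fell sharply", "the bank raised rates"]),
}
print(predict("the team scored a goal", anchors))  # -> "sports"
```

Note that no seen-class training is involved anywhere in this sketch; the only supervision comes from the generated pseudo samples, which is the point of the "liberating seen classes" framing.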

Published

2024-03-24

How to Cite

Liu, H., Zhao, S., Zhang, X., Zhang, F., Wang, W., Ma, F., Chen, H., Yu, H., & Zhang, X. (2024). Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18644-18652. https://doi.org/10.1609/aaai.v38i17.29827

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II