Prototypical Fine-Tuning: Towards Robust Performance under Varying Data Sizes
DOI:
https://doi.org/10.1609/aaai.v37i11.26524
Keywords:
SNLP: Text Classification, SNLP: Bias, Fairness, Transparency & Privacy, SNLP: Interpretability & Analysis of NLP Models, SNLP: Language Models
Abstract
In this paper, we move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs), which automatically learns a bias to improve predictive performance for varying data sizes, especially in low-resource settings. Our prototypical fine-tuning approach can automatically adjust the model capacity according to the number of data points and the model's inherent attributes. Moreover, we propose four principles for effective prototypical fine-tuning towards the optimal solution. Experimental results across various datasets show that our work achieves significant performance improvements under various low-resource settings, as well as comparable, and usually better, performance in high-resource scenarios.
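As an illustrative sketch only (not the authors' released implementation), the core idea of pairing a pretrained LM encoder with a non-parametric prototypical classifier can be shown as a head that scores inputs by their distance to learnable class prototypes instead of a parametric softmax layer. The class names, the number of prototypes per class, and the use of Euclidean distance below are assumptions made for illustration.

```python
# Minimal sketch (assumed design, not the paper's code): a prototype-based
# classification head on top of pooled LM embeddings. Predictions use the
# negative distance to the nearest learnable prototype of each class.
import torch
import torch.nn as nn


class PrototypicalHead(nn.Module):
    """Classify by Euclidean distance to learnable class prototypes."""

    def __init__(self, hidden_size: int, num_classes: int, protos_per_class: int = 1):
        super().__init__()
        self.num_classes = num_classes
        self.protos_per_class = protos_per_class
        # Learnable prototype vectors, shape (num_classes * protos_per_class, hidden_size).
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * protos_per_class, hidden_size) * 0.02
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, hidden_size) pooled representations from the LM.
        # Pairwise distances to every prototype: (batch, num_classes * protos_per_class).
        dists = torch.cdist(embeddings, self.prototypes)
        # Keep the closest prototype per class; its negative distance is the class logit.
        dists = dists.view(-1, self.num_classes, self.protos_per_class)
        return -dists.min(dim=-1).values  # logits: (batch, num_classes)


if __name__ == "__main__":
    # Stand-in for pooled LM output (e.g. a [CLS] representation); in a full
    # setup the encoder and the prototypes would be fine-tuned end to end.
    batch, hidden, classes = 8, 768, 4
    head = PrototypicalHead(hidden, classes, protos_per_class=2)
    pooled = torch.randn(batch, hidden)
    logits = head(pooled)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, classes, (batch,)))
    loss.backward()
    print(logits.shape, float(loss))
```

In this sketch the effective capacity of the classifier scales with the number of prototypes rather than with a fixed weight matrix, which is one plausible way to read the paper's claim about adjusting model capacity to the amount of available data.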
Published
2023-06-26
How to Cite
Jin, Y., Wang, X., Hao, Y., Sun, Y., & Xie, X. (2023). Prototypical Fine-Tuning: Towards Robust Performance under Varying Data Sizes. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12968-12976. https://doi.org/10.1609/aaai.v37i11.26524
Issue
Section
AAAI Technical Track on Speech & Natural Language Processing