Uncertainty-Aware Self-Training for Low-Resource Neural Sequence Labeling
Keywords: SNLP: Information Extraction, SNLP: Applications, SNLP: Language Models, SNLP: Text Mining
Abstract
Neural sequence labeling (NSL) aims to assign labels to input language tokens and covers a broad range of applications, such as named entity recognition (NER) and slot filling. However, the strong results achieved by traditional supervised approaches depend heavily on large amounts of human-annotated data, which may not be feasible to obtain in real-world scenarios due to data privacy and computational efficiency concerns. This paper presents SeqUST, a novel uncertainty-aware self-training framework for NSL that addresses the scarcity of labeled data by effectively exploiting unlabeled data. Specifically, we incorporate Monte Carlo (MC) dropout into a Bayesian neural network (BNN) to perform token-level uncertainty estimation, and then select reliable language tokens from unlabeled data based on model confidence and certainty. A well-designed masked sequence labeling task with a noise-robust loss supports robust training and suppresses the effect of noisy pseudo labels. In addition, we develop a Gaussian-based consistency regularization technique to further improve model robustness under Gaussian-distributed perturbations of the representations. This effectively alleviates the over-fitting caused by pseudo-labeled augmented data. Extensive experiments on six benchmarks demonstrate that our SeqUST framework effectively improves the performance of self-training and consistently outperforms strong baselines by a large margin in low-resource scenarios.
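The token-selection step sketched in the abstract (multiple stochastic forward passes with dropout, then keeping tokens whose predictions are both confident and certain) can be illustrated with a minimal NumPy toy. This is an assumption-laden sketch, not the authors' implementation: the function name `mc_dropout_select`, the thresholds, and the toy linear scorer are all hypothetical, and a real system would use a trained neural tagger.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_select(logits_fn, x, n_passes=10, p_drop=0.1,
                      conf_thresh=0.6, var_thresh=0.2):
    """Token-level MC-dropout selection (illustrative sketch).

    logits_fn maps a (seq_len, hidden) array to (seq_len, n_labels)
    logits; dropout is applied to the input features on each pass.
    Returns per-token pseudo labels and a boolean keep-mask for the
    tokens judged reliable (high confidence, low predictive variance).
    """
    probs = []
    for _ in range(n_passes):
        mask = rng.random(x.shape) > p_drop        # fresh dropout mask per pass
        z = logits_fn(x * mask / (1.0 - p_drop))   # stochastic forward pass
        e = np.exp(z - z.max(-1, keepdims=True))   # stable softmax
        probs.append(e / e.sum(-1, keepdims=True))
    probs = np.stack(probs)                        # (n_passes, seq_len, n_labels)
    mean_p = probs.mean(axis=0)                    # predictive mean per token
    conf = mean_p.max(axis=-1)                     # model confidence
    var = probs.var(axis=0).sum(axis=-1)           # predictive variance (uncertainty)
    keep = (conf >= conf_thresh) & (var <= var_thresh)
    return mean_p.argmax(axis=-1), keep

# Toy usage: 5 "tokens" with hidden size 4, 3 labels (all values hypothetical).
W = rng.normal(size=(4, 3))                        # toy label projection
X = rng.normal(size=(5, 4))                        # unlabeled token representations
pred, keep = mc_dropout_select(lambda h: h @ W, X)
```

Only the tokens flagged by `keep` would contribute pseudo labels to the next self-training round; the remaining tokens are handled by the masked sequence labeling task described in the paper.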
How to Cite
Wang, J., Wang, C., Huang, J., Gao, M., & Zhou, A. (2023). Uncertainty-Aware Self-Training for Low-Resource Neural Sequence Labeling. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13682-13690. https://doi.org/10.1609/aaai.v37i11.26603
AAAI Technical Track on Speech & Natural Language Processing