Neighborhood-Regularized Self-Training for Learning with Few Labels


  • Ran Xu Emory University
  • Yue Yu Georgia Institute of Technology
  • Hejie Cui Emory University
  • Xuan Kan Emory University
  • Yanqiao Zhu University of California, Los Angeles
  • Joyce Ho Emory University
  • Chao Zhang Georgia Institute of Technology
  • Carl Yang Emory University



ML: Semi-Supervised Learning, APP: Bioinformatics, SNLP: Text Classification


Training deep neural networks (DNNs) with limited supervision has been a popular research topic, as it can significantly alleviate the annotation burden. Self-training has been successfully applied in semi-supervised learning tasks, but one drawback of self-training is that it is vulnerable to label noise from incorrect pseudo labels. Inspired by the fact that samples with similar labels tend to share similar representations, we develop a neighborhood-based sample selection approach to tackle the issue of noisy pseudo labels. We further stabilize self-training by aggregating the predictions from different rounds during sample selection. Experiments on eight tasks show that our proposed method outperforms the strongest self-training baseline with average performance gains of 1.83% on text datasets and 2.51% on graph datasets. Further analysis demonstrates that our data selection strategy reduces pseudo-label noise by 36.8% and saves 57.3% of the time compared with the best baseline. Our code and appendices will be uploaded to:
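The core idea of the neighborhood-based selection can be illustrated with a minimal sketch: keep a pseudo-labeled sample only when its nearest neighbors in representation space mostly agree with its pseudo label. The function name, the neighbor count `k`, and the agreement `threshold` below are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def select_by_neighborhood(embeddings, pseudo_labels, k=3, threshold=0.5):
    """Illustrative sketch: keep pseudo-labeled samples whose k nearest
    neighbors (Euclidean distance in representation space) agree with
    their pseudo label at a rate of at least `threshold`."""
    n = len(embeddings)
    # Pairwise Euclidean distances between all sample representations.
    dists = np.linalg.norm(
        embeddings[:, None, :] - embeddings[None, :, :], axis=-1
    )
    np.fill_diagonal(dists, np.inf)  # exclude each sample from its own neighborhood
    selected = []
    for i in range(n):
        neighbors = np.argsort(dists[i])[:k]
        agreement = np.mean(pseudo_labels[neighbors] == pseudo_labels[i])
        if agreement >= threshold:
            selected.append(i)
    return selected

# Toy example: two well-separated clusters, with sample 2 carrying a
# pseudo label that disagrees with its neighborhood.
emb = np.array([[0, 0], [0.1, 0], [0, 0.1],
                [5, 5], [5.1, 5], [5, 5.1]], dtype=float)
labels = np.array([0, 0, 1, 1, 1, 1])  # index 2 is a noisy pseudo label
kept = select_by_neighborhood(emb, labels, k=2, threshold=0.5)
```

Here the noisy sample (index 2) is filtered out because both of its nearest neighbors carry the other label, while samples in agreement with their neighborhoods are retained for the next self-training round.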




How to Cite

Xu, R., Yu, Y., Cui, H., Kan, X., Zhu, Y., Ho, J., Zhang, C., & Yang, C. (2023). Neighborhood-Regularized Self-Training for Learning with Few Labels. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10611-10619.



AAAI Technical Track on Machine Learning IV