Label Confusion Learning to Enhance Text Classification Models

Authors

  • Biyang Guo, AI Lab, School of Information Management and Engineering, Shanghai University of Finance and Economics
  • Songqiao Han, AI Lab, School of Information Management and Engineering, Shanghai University of Finance and Economics
  • Xiao Han, AI Lab, School of Information Management and Engineering, Shanghai University of Finance and Economics
  • Hailiang Huang, AI Lab, School of Information Management and Engineering, Shanghai University of Finance and Economics
  • Ting Lu, AI Lab, School of Information Management and Engineering, Shanghai University of Finance and Economics

DOI:

https://doi.org/10.1609/aaai.v35i14.17529

Keywords:

Text Classification & Sentiment Analysis

Abstract

Representing the true label as a one-hot vector is common practice in training text classification models. However, the one-hot representation may not adequately reflect the relation between an instance and the labels, as labels are often not completely independent and an instance may relate to multiple labels in practice. Such inadequate one-hot representations tend to train the model to be over-confident, which may result in arbitrary predictions and model overfitting, especially on confused datasets (datasets with very similar labels) or noisy datasets (datasets with labeling errors). While training models with label smoothing can ease this problem to some degree, it still fails to capture the realistic relations among labels. In this paper, we propose a novel Label Confusion Model (LCM) as an enhancement component for current popular text classification models. During training, LCM learns label confusion that captures the semantic overlap among labels by computing the similarity between the instance and the labels, and generates a better label distribution to replace the original one-hot label vector, thus improving the final classification performance. Extensive experiments on five text classification benchmark datasets demonstrate the effectiveness of LCM for several widely used deep learning classification models. Further experiments also verify that LCM is especially helpful for confused or noisy datasets and superior to the label smoothing method.
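
The abstract describes LCM only at a high level. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released implementation: a label embedding layer provides label representations, their similarity to the instance representation yields a label confusion distribution, and mixing it with the one-hot target (weighted by an assumed parameter alpha) produces the simulated label distribution that replaces the one-hot vector. Training the classifier toward this distribution with a KL-divergence objective is likewise an illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelConfusionModel(nn.Module):
    """Illustrative sketch of the LCM idea described in the abstract:
    compute instance-label similarity, turn it into a label confusion
    distribution, and mix it with the one-hot target to obtain a
    simulated label distribution used as the training target."""

    def __init__(self, num_labels: int, hidden_dim: int, alpha: float = 4.0):
        super().__init__()
        # Learnable label representations (assumed form of the label encoder).
        self.label_embedding = nn.Embedding(num_labels, hidden_dim)
        # alpha (assumed name) controls how strongly the one-hot target
        # dominates the mixture.
        self.alpha = alpha

    def forward(self, instance_repr: torch.Tensor, one_hot: torch.Tensor) -> torch.Tensor:
        # instance_repr: (batch, hidden_dim) from any text encoder (LSTM, CNN, BERT, ...)
        # one_hot:       (batch, num_labels) original one-hot targets
        label_repr = self.label_embedding.weight          # (num_labels, hidden_dim)
        similarity = instance_repr @ label_repr.t()       # instance-label similarity scores
        confusion = F.softmax(similarity, dim=-1)         # label confusion distribution
        # Simulated label distribution: one-hot target reshaped by label confusion.
        simulated = F.softmax(self.alpha * one_hot + confusion, dim=-1)
        return simulated


def lcm_loss(pred_logits: torch.Tensor, simulated: torch.Tensor) -> torch.Tensor:
    # Train the classifier's predicted distribution toward the simulated
    # label distribution with KL divergence instead of one-hot cross-entropy.
    log_pred = F.log_softmax(pred_logits, dim=-1)
    return F.kl_div(log_pred, simulated, reduction="batchmean")
```

In this sketch, a larger alpha keeps the training target close to the original one-hot vector, while a smaller alpha lets the learned label confusion reshape it more strongly; the specific value and mixing form are assumptions for illustration only.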

Published

2021-05-18

How to Cite

Guo, B., Han, S., Han, X., Huang, H., & Lu, T. (2021). Label Confusion Learning to Enhance Text Classification Models. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12929-12936. https://doi.org/10.1609/aaai.v35i14.17529

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I