Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning

Authors

  • Yonghao Liu Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
  • Mengyu Li Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
  • Wei Pang Mathematical and Computer Sciences, Heriot-Watt University
  • Fausto Giunchiglia University of Trento
  • Lan Huang Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
  • Xiaoyue Feng Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
  • Renchu Guan Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University

DOI:

https://doi.org/10.1609/aaai.v39i23.34650

Abstract

Short text classification, a research subtopic in natural language processing, is particularly challenging due to the semantic sparsity of short texts and the insufficiency of labeled samples in practical scenarios. In this work, we propose a novel model named MI-DELIGHT for short text classification. Specifically, it first performs multi-source information (i.e., statistical, linguistic, and factual information) exploration to alleviate the sparsity issue. Then, a graph learning approach is adopted to learn representations of the short texts, which are expressed in graph form. Moreover, we introduce a dual-level (i.e., instance-level and cluster-level) contrastive learning auxiliary task to effectively capture contrastive information of different granularities within massive unlabeled data. Meanwhile, previous models merely perform the main task and auxiliary tasks in parallel, without considering the relationships among tasks. Therefore, we introduce a hierarchical architecture to explicitly model the correlations between tasks. We conduct extensive experiments across various benchmark datasets, demonstrating that MI-DELIGHT significantly surpasses previous competitive models. It even outperforms popular large language models on several datasets.
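The abstract's dual-level contrastive objective can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's actual implementation: `info_nce` is a standard instance-level InfoNCE loss over two augmented views, and `cluster_contrastive` applies the same loss to the columns of soft cluster-assignment matrices (as in contrastive-clustering approaches); MI-DELIGHT's exact loss formulations, augmentations, and hyperparameters may differ.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Instance-level contrastive loss (InfoNCE) between two views.

    z1, z2: (n, d) arrays of L2-normalized embeddings; row i of z1 and
    row i of z2 are two augmented views of the same short text.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)            # (2n, d) all views
    sim = z @ z.T / tau                             # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    # Positive for view i in z1 is view i in z2, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

def cluster_contrastive(p1, p2, tau=1.0):
    """Cluster-level contrastive loss (hypothetical variant): treat each
    column of the soft assignment matrix as a "cluster representation"
    and contrast matching clusters across the two views.

    p1, p2: (n, k) soft cluster assignments (rows sum to 1).
    """
    c1 = p1.T / np.linalg.norm(p1.T, axis=1, keepdims=True)
    c2 = p2.T / np.linalg.norm(p2.T, axis=1, keepdims=True)
    return info_nce(c1, c2, tau)

# Toy check: a view close to the original should incur a lower loss.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.05 * rng.normal(size=z1.shape)
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
aligned_loss = float(info_nce(z1, z2))
```

In a hierarchical setup like the one the abstract describes, losses such as these would feed into the main classification objective rather than being optimized purely in parallel.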

Published

2025-04-11

How to Cite

Liu, Y., Li, M., Pang, W., Giunchiglia, F., Huang, L., Feng, X., & Guan, R. (2025). Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24696–24704. https://doi.org/10.1609/aaai.v39i23.34650

Section

AAAI Technical Track on Natural Language Processing II