Compositional Generalization for Multi-Label Text Classification: A Data-Augmentation Approach


  • Yuyang Chai Wuhan University
  • Zhuang Li Monash University
  • Jiahui Liu Wuhan University
  • Lei Chen Wuhan University
  • Fei Li Wuhan University
  • Donghong Ji Wuhan University
  • Chong Teng Wuhan University



NLP: Text Classification, NLP: Generation


Despite significant advancements in multi-label text classification, the ability of existing models to generalize to novel and seldom-encountered complex concepts, which are compositions of elementary ones, remains underexplored. This research addresses this gap. By creating unique data splits across three benchmarks, we assess the compositional generalization ability of existing multi-label text classification models. Our results show that these models often fail to generalize to compositional concepts encountered infrequently during training, leading to inferior performance on tests with these new combinations. To address this, we introduce a data augmentation method that leverages two innovative text generation models designed to enhance the classification models' capacity for compositional generalization. Our experiments show that this data augmentation approach significantly improves the compositional generalization capabilities of classification models on our benchmarks, with both generation models surpassing other text generation baselines. Our code is available at



How to Cite

Chai, Y., Li, Z., Liu, J., Chen, L., Li, F., Ji, D., & Teng, C. (2024). Compositional Generalization for Multi-Label Text Classification: A Data-Augmentation Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17727-17735.



AAAI Technical Track on Natural Language Processing I