Compositional Generalization for Multi-Label Text Classification: A Data-Augmentation Approach
DOI:
https://doi.org/10.1609/aaai.v38i16.29725
Keywords:
NLP: Text Classification, NLP: Generation
Abstract
Despite significant advancements in multi-label text classification, the ability of existing models to generalize to novel and seldom-encountered complex concepts, which are compositions of elementary ones, remains underexplored. This research addresses that gap. By creating unique data splits across three benchmarks, we assess the compositional generalization ability of existing multi-label text classification models. Our results show that these models often fail to generalize to compositional concepts encountered infrequently during training, leading to inferior performance on test sets containing these new combinations. To address this, we introduce a data augmentation method that leverages two novel text generation models designed to enhance the classification models' capacity for compositional generalization. Our experiments show that this data augmentation approach significantly improves the compositional generalization capabilities of classification models on our benchmarks, with both generation models surpassing other text generation baselines. Our code is available at https://github.com/yychai74/LD-VAE.
Published
2024-03-24
How to Cite
Chai, Y., Li, Z., Liu, J., Chen, L., Li, F., Ji, D., & Teng, C. (2024). Compositional Generalization for Multi-Label Text Classification: A Data-Augmentation Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17727-17735. https://doi.org/10.1609/aaai.v38i16.29725
Issue
Section
AAAI Technical Track on Natural Language Processing I