Dual Adversarial Co-Learning for Multi-Domain Text Classification


  • Yuan Wu Carleton University
  • Yuhong Guo Carleton University




With the advent of deep learning, the performance of text classification models has improved significantly. Nevertheless, training a good classification model requires a sufficient amount of labeled data, and annotating data is expensive and time consuming. With the rapid growth of digital data, similar classification tasks typically occur in multiple domains, while the availability of labeled data can vary greatly across domains: some domains may have abundant labeled data, while others may have only a limited amount (or none at all). Meanwhile, text classification tasks are highly domain-dependent: a text classifier trained in one domain may not perform well in another. To address these issues, in this paper we propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC). The approach learns shared-private networks for feature extraction and deploys dual adversarial regularizations to align features across different domains and between labeled and unlabeled data simultaneously under a discrepancy-based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features. We conduct experiments on multi-domain sentiment classification datasets; the results show that the proposed approach achieves state-of-the-art MDTC performance.
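To make the shared-private idea concrete, the following is a minimal sketch (not the authors' implementation): one feature extractor is shared across all domains, each domain additionally has a private extractor, and each domain's classifier operates on the concatenation of shared and private features. The networks here are stand-in random linear maps, and all names, dimensions, and domain labels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """A fixed random linear map with tanh, standing in for a learned network."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

# Illustrative dimensions (assumptions, not from the paper).
input_dim, shared_dim, private_dim, n_classes = 50, 16, 8, 2
domains = ["books", "dvd", "electronics"]

shared_extractor = linear(input_dim, shared_dim)  # one extractor shared by all domains
private_extractors = {d: linear(input_dim, private_dim) for d in domains}  # one per domain
classifiers = {d: linear(shared_dim + private_dim, n_classes) for d in domains}  # one per domain

def predict(x, domain):
    """Classify x for a given domain using concatenated shared + private features."""
    feats = np.concatenate([shared_extractor(x), private_extractors[domain](x)])
    scores = classifiers[domain](feats)
    return int(np.argmax(scores))

x = rng.standard_normal(input_dim)
label = predict(x, "books")
```

In the full method, the shared extractor is additionally trained against dual adversarial regularizers that align features across domains and between labeled and unlabeled data; this sketch only shows the forward-pass structure those regularizers act on.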




How to Cite

Wu, Y., & Guo, Y. (2020). Dual Adversarial Co-Learning for Multi-Domain Text Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6438-6445. https://doi.org/10.1609/aaai.v34i04.6115



AAAI Technical Track: Machine Learning