On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning
DOI:
https://doi.org/10.1609/aaai.v38i16.29800
Keywords:
NLP: Text Classification, ML: Transfer, Domain Adaptation, Multi-Task Learning
Abstract
To date, a mainstay of methods for unsupervised domain adaptation (UDA) has been to learn label-discriminative features via a label classifier and domain-invariant features via a domain discriminator in an adversarial scheme. However, these methods lack explicit control over aligning source and target data within the same label class, which degrades the classifier's performance in the target domain. In this paper, we propose PL-Mix, a pseudo label guided Mixup method based on adversarial prompt tuning. Specifically, PL-Mix facilitates class-dependent alignment and alleviates the impact of noisy pseudo labels. We further provide a theoretical justification that PL-Mix improves generalization for UDA. Extensive experiments comparing PL-Mix with existing models also demonstrate its effectiveness.
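Since the abstract describes the core mechanism only at a high level, the sketch below illustrates what pseudo-label guided, class-dependent mixup can look like in PyTorch. It is an illustrative reconstruction, not the authors' PL-Mix implementation (which additionally operates within an adversarial prompt-tuning setup): the function name pseudo_label_mixup, the confidence threshold, and the Beta(alpha, alpha) mixing prior are all assumptions made here for exposition.

    # Minimal sketch: mix each labeled source example with a target example
    # whose *pseudo* label matches the source's gold label, so that mixing
    # aligns the two domains class by class. Low-confidence pseudo labels
    # are filtered out to limit the impact of label noise.
    # NOTE: illustrative only; names and hyperparameters are assumptions.
    import torch
    import torch.nn.functional as F

    def pseudo_label_mixup(src_feats, src_labels, tgt_feats, tgt_logits,
                           num_classes, alpha=0.2, conf_threshold=0.9):
        """src_feats: (B, D) source features; src_labels: (B,) long gold labels.
        tgt_feats: (M, D) target features; tgt_logits: (M, C) classifier outputs.
        Returns mixed features and soft labels; a source example with no
        confident same-class target partner is kept unmixed."""
        tgt_probs = tgt_logits.softmax(dim=-1)
        tgt_conf, tgt_pseudo = tgt_probs.max(dim=-1)   # confidence + pseudo label

        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        lam = max(lam, 1.0 - lam)                      # keep the source side dominant

        src_onehot = F.one_hot(src_labels, num_classes).float()
        mixed_feats = src_feats.clone()
        mixed_labels = src_onehot.clone()

        for i, y in enumerate(src_labels):
            # candidate partners: same pseudo class, sufficiently confident
            mask = (tgt_pseudo == y) & (tgt_conf >= conf_threshold)
            idx = mask.nonzero(as_tuple=True)[0]
            if len(idx) == 0:
                continue                               # no reliable partner; skip
            j = idx[torch.randint(len(idx), (1,))].item()
            mixed_feats[i] = lam * src_feats[i] + (1 - lam) * tgt_feats[j]
            mixed_labels[i] = lam * src_onehot[i] + (1 - lam) * tgt_probs[j]
        return mixed_feats, mixed_labels

The mixed pairs would then be trained with a soft cross-entropy loss, e.g. -(mixed_labels * logits.log_softmax(-1)).sum(-1).mean(), so the classifier sees interpolated source-target examples within each class.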
Published
2024-03-24
How to Cite
Kong, F., Zhang, R., Wang, Z., & Mao, Y. (2024). On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18399-18407. https://doi.org/10.1609/aaai.v38i16.29800
Issue
Vol. 38 No. 16 (2024)
Section
AAAI Technical Track on Natural Language Processing I