Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers

Authors

  • Payam Karisani, Emory University

DOI:

https://doi.org/10.1609/aaai.v36i7.20668

Keywords:

Machine Learning (ML), Speech & Natural Language Processing (SNLP)

Abstract

We present a novel multiple-source unsupervised model for text classification under domain shift. Our model exploits the update rates of document representations to dynamically integrate domain encoders. It also employs a probabilistic heuristic to infer the error rate in the target domain in order to pair source classifiers. The heuristic exploits the data transformation cost and the classifier accuracy in the target feature space. We use real-world domain adaptation scenarios to evaluate the efficacy of our algorithm. We also use pretrained multi-layer transformers as the document encoder in the experiments to examine whether the improvements achieved by domain adaptation models can be delivered by out-of-the-box language-model pretraining. The experiments show that our model is the top-performing approach in this setting.
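The abstract describes pairing and weighting source classifiers by an inferred target-domain error that depends on a transformation cost and classifier accuracy in the target feature space. The paper gives the actual formulation; the snippet below is only a minimal, hypothetical sketch of that general idea. The function name, the exponential cost penalty, and the use of mean prediction confidence as a reliability proxy are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def combine_source_classifiers(probs_per_source, transfer_costs, alpha=1.0):
    """Illustrative sketch (not the paper's algorithm): weight each source
    classifier's predictions on target documents by a rough reliability
    estimate derived from its confidence and a feature-transformation cost.

    probs_per_source: array of shape (n_sources, n_docs, n_classes)
        predicted class probabilities of each source classifier on target docs.
    transfer_costs: array of shape (n_sources,)
        assumed cost of mapping target features into each source's space.
    """
    probs_per_source = np.asarray(probs_per_source)
    transfer_costs = np.asarray(transfer_costs)

    # Proxy for per-source accuracy in the target space: average confidence
    # (max class probability) on the target documents.
    confidence = probs_per_source.max(axis=2).mean(axis=1)   # (n_sources,)

    # Penalize sources whose feature space is costly to reach from the target.
    reliability = confidence * np.exp(-alpha * transfer_costs)

    weights = reliability / reliability.sum()
    # Weighted average of the source classifiers' class distributions.
    combined = np.tensordot(weights, probs_per_source, axes=(0, 0))
    return combined, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 3 sources, 5 target documents, 2 classes (synthetic probabilities).
    probs = rng.dirichlet(np.ones(2), size=(3, 5))
    costs = np.array([0.2, 0.5, 1.0])
    combined, weights = combine_source_classifiers(probs, costs)
    print("source weights:", weights)
    print("combined predictions shape:", combined.shape)
```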

Published

2022-06-28

How to Cite

Karisani, P. (2022). Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7087-7095. https://doi.org/10.1609/aaai.v36i7.20668

Section

AAAI Technical Track on Machine Learning II