Unsupervised Domain Adaptation on Reading Comprehension

Authors

  • Yu Cao The University of Sydney
  • Meng Fang University of Waikato
  • Baosheng Yu The University of Sydney
  • Joey Tianyi Zhou A*STAR

DOI:

https://doi.org/10.1609/aaai.v34i05.6245

Abstract

Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate this problem, we investigate unsupervised domain adaptation for RC, wherein a model is trained on a labeled source domain and applied to a target domain with only unlabeled samples. We first show that, even with powerful BERT contextual representations, a model cannot generalize well from one domain to another. To address this, we propose a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset, together with confidence filtering, to generate reliable pseudo-labeled samples in the target domain for self-training. In addition, it further reduces the domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show that our approach achieves comparable performance to supervised models on multiple large-scale benchmark datasets.
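
The abstract describes two components: confidence-filtered pseudo-labeling for self-training and conditional adversarial learning across domains. The following is a minimal, hypothetical PyTorch sketch of these two ideas; the confidence threshold, hidden size, and the CDAN-style feature-prediction conditioning are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: confidence-filtered pseudo-labeling + a conditional domain
# discriminator. Hyperparameters and the conditioning scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def filter_pseudo_labels(start_logits, end_logits, threshold=0.4):
    """Keep target-domain samples whose predicted answer span is confident.

    start_logits, end_logits: [batch, seq_len] from a source-fine-tuned QA model.
    Returns indices of kept samples and their pseudo start/end positions.
    """
    start_prob, start_pos = F.softmax(start_logits, dim=-1).max(dim=-1)
    end_prob, end_pos = F.softmax(end_logits, dim=-1).max(dim=-1)
    confidence = start_prob * end_prob          # joint span confidence
    keep = confidence > threshold               # confidence filtering
    return keep.nonzero(as_tuple=True)[0], start_pos[keep], end_pos[keep]


class ConditionalDomainDiscriminator(nn.Module):
    """Predicts source vs. target from features conditioned on model outputs."""

    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        # Conditioning via the outer product of features and predictions,
        # in the spirit of conditional adversarial adaptation (assumed here).
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features, predictions):
        # features: [batch, feat_dim], predictions: [batch, num_classes]
        joint = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))
        return self.net(joint.flatten(start_dim=1))   # domain logit per sample
```

In a self-training loop, the source-fine-tuned model would label target samples, the filter would keep only high-confidence spans for the next training round, and the discriminator's adversarial loss would push source and target feature distributions together.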

Published

2020-04-03

How to Cite

Cao, Y., Fang, M., Yu, B., & Zhou, J. T. (2020). Unsupervised Domain Adaptation on Reading Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7480-7487. https://doi.org/10.1609/aaai.v34i05.6245

Section

AAAI Technical Track: Natural Language Processing