A Representation Learning Framework for Multi-Source Transfer Parsing

Authors

  • Jiang Guo, Harbin Institute of Technology
  • Wanxiang Che, Harbin Institute of Technology
  • David Yarowsky, Johns Hopkins University
  • Haifeng Wang, Baidu Inc.
  • Ting Liu, Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v30i1.10352

Keywords:

Natural Language Processing, Representation Learning, Multilingual Learning, Dependency Parsing

Abstract

Cross-lingual model transfer has been a promising approach for inducing dependency parsers in low-resource languages where annotated treebanks are not available. The major obstacles for the model transfer approach are two-fold: 1. Lexical features are not directly transferable across languages; 2. Target language-specific syntactic structures are difficult to recover. To address these two challenges, we present a novel representation learning framework for multi-source transfer parsing. Our framework allows multi-source transfer parsing to use full lexical features in a straightforward manner. Evaluated on the Google universal dependency treebanks (v2.0), our best models yield an absolute improvement of 6.53% in average labeled attachment score, as compared with delexicalized multi-source transfer models. We also significantly outperform the most recent state-of-the-art transfer system.
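The central idea, replacing delexicalized (POS-only) features with word representations that live in a space shared across languages, can be illustrated with a toy sketch. The snippet below is not the paper's model: the embedding table, feature layout, and linear arc scorer are hypothetical stand-ins used only to show why lexical features become transferable once source- and target-language words are embedded in one common space.

```python
import numpy as np

# Hypothetical cross-lingual embeddings: words from different source
# languages and the target language share one vector space, so a scorer
# trained on English/German tokens can be applied to Spanish tokens directly.
CROSS_LINGUAL_EMB = {
    "en:dog":   np.array([0.90, 0.10, 0.00]),
    "de:hund":  np.array([0.85, 0.15, 0.05]),
    "es:perro": np.array([0.88, 0.12, 0.02]),  # target-language word, unseen in training
    "en:barks": np.array([0.10, 0.90, 0.20]),
    "es:ladra": np.array([0.12, 0.85, 0.25]),
}
EMB_DIM = 3
UNK = np.zeros(EMB_DIM)

def lexical_features(head, dependent):
    """Concatenate shared-space embeddings of a candidate head-dependent pair."""
    return np.concatenate([CROSS_LINGUAL_EMB.get(head, UNK),
                           CROSS_LINGUAL_EMB.get(dependent, UNK)])

# A made-up linear arc scorer standing in for the actual parser: because the
# features are language-independent vectors rather than language-specific word
# identities, weights learned from multiple source treebanks carry over to the target.
rng = np.random.default_rng(0)
W = rng.normal(size=2 * EMB_DIM)

def arc_score(head, dependent):
    return float(W @ lexical_features(head, dependent))

# The same scorer handles a source-language arc and a target-language arc;
# similar feature vectors yield similar scores without any target annotation.
print(arc_score("en:barks", "en:dog"))    # source-language arc
print(arc_score("es:ladra", "es:perro"))  # target-language arc, same weights
```

In the framework described in the abstract, such shared representations are learned rather than hand-specified and are fed into a parser trained on multiple source-language treebanks; the sketch only illustrates how a shared lexical representation space makes full lexical features usable across languages.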

Published

2016-03-05

How to Cite

Guo, J., Che, W., Yarowsky, D., Wang, H., & Liu, T. (2016). A Representation Learning Framework for Multi-Source Transfer Parsing. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10352

Section

Technical Papers: NLP and Machine Learning