Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification

Authors

  • Mozhi Zhang, University of Maryland
  • Yoshinari Fujinuma, University of Colorado
  • Jordan Boyd-Graber, University of Maryland

DOI:

https://doi.org/10.1609/aaai.v34i05.6500

Abstract

Text classification must sometimes be applied in a low-resource language with no labeled training data. However, training data may be available in a related language. We investigate whether character-level knowledge transfer from a related language helps text classification. We present a cross-lingual document classification framework (CACO) that exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. Experiments confirm that character-level knowledge transfer is more data-efficient than word-level transfer between related languages.
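
To make the two-component architecture concrete, below is a minimal PyTorch sketch of the design the abstract describes: a character-level embedder that builds word vectors from spellings, feeding a word-level classifier. It is an illustration under stated assumptions, not the authors' released code: the BiLSTM embedder, the averaging classifier, and all names (CharEmbedder, Classifier) are hypothetical choices consistent with the abstract.

# Sketch (not the authors' implementation) of a CACO-style model:
# a character-level embedder derives word vectors from written forms,
# and a word-level classifier predicts a document label from them.
# Assumed design: BiLSTM char embedder + mean-pooling linear classifier.
import torch
import torch.nn as nn

class CharEmbedder(nn.Module):
    """Maps a word, given as a character-ID sequence, to a word vector."""
    def __init__(self, n_chars, char_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, char_ids):             # (n_words, max_word_len)
        h, _ = self.lstm(self.char_emb(char_ids))
        return h.mean(dim=1)                 # (n_words, word_dim)

class Classifier(nn.Module):
    """Averages word vectors and predicts a document label."""
    def __init__(self, word_dim, n_labels):
        super().__init__()
        self.out = nn.Linear(word_dim, n_labels)

    def forward(self, word_vecs):            # (n_words, word_dim)
        return self.out(word_vecs.mean(dim=0, keepdim=True))

# Both languages share one character inventory, so training on the
# source language can transfer to target-language words whose
# spellings are similar.
embedder = CharEmbedder(n_chars=100)
classifier = Classifier(word_dim=64, n_labels=4)
doc = torch.randint(1, 100, (7, 12))         # toy doc: 7 words, 12 chars each
logits = classifier(embedder(doc))           # (1, n_labels)

Because the embedder is shared across languages, gradients from source-language documents shape the character representations that target-language words are later embedded with; this is the mechanism behind the abstract's claim of character-level transfer.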

Published

2020-04-03

How to Cite

Zhang, M., Fujinuma, Y., & Boyd-Graber, J. (2020). Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9547-9554. https://doi.org/10.1609/aaai.v34i05.6500

Section

AAAI Technical Track: Natural Language Processing