GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction

Authors

  • Wasi Uddin Ahmad University of California, Los Angeles
  • Nanyun Peng University of California, Los Angeles
  • Kai-Wei Chang University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v35i14.17478

Keywords:

Information Extraction

Abstract

Recent progress in cross-lingual relation and event extraction uses graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations, such that models trained on one language can be applied to other languages. However, GCNs struggle to model words with long-range dependencies or words that are not directly connected in the dependency tree. To address these challenges, we propose to utilize the self-attention mechanism, where we explicitly fuse structural information to learn the dependencies between words at different syntactic distances. We introduce GATE, a Graph Attention Transformer Encoder, and test its cross-lingual transferability on relation and event extraction tasks. We perform experiments on the ACE05 dataset, which includes three typologically different languages: English, Chinese, and Arabic. The evaluation results show that GATE outperforms three recently proposed methods by a large margin. Our detailed analysis reveals that, due to its reliance on syntactic dependencies, GATE produces robust representations that facilitate transfer across languages.
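
The abstract describes fusing structural information from dependency parses into self-attention so that pairs of words at different syntactic distances can still attend to one another. The sketch below is an illustrative, simplified take on that idea, not the authors' implementation: it computes pairwise syntactic distances over a dependency tree and uses them to mask a single attention head. The function names, the distance threshold max_dist, and the NumPy setup are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): self-attention whose scores are
# masked by pairwise syntactic distance in a dependency tree.
import numpy as np
from collections import deque

def syntactic_distances(heads):
    """Pairwise hop distances between tokens in a dependency tree.

    heads[i] is the index of token i's head; the root token has head -1.
    """
    n = len(heads)
    # Build an undirected adjacency list over the dependency tree.
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = np.full((n, n), np.inf)
    for src in range(n):  # BFS from every token
        dist[src, src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if np.isinf(dist[src, v]):
                    dist[src, v] = dist[src, u] + 1
                    queue.append(v)
    return dist

def distance_aware_attention(x, heads, max_dist=2):
    """One attention head that only attends within max_dist tree hops."""
    n, d = x.shape
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d)
    # Fuse structure: positions farther than max_dist hops are masked out.
    scores[syntactic_distances(heads) > max_dist] = -1e9
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 4 tokens, token 1 is the root.
x = np.random.default_rng(1).standard_normal((4, 8))
out = distance_aware_attention(x, heads=[1, -1, 1, 2], max_dist=2)
print(out.shape)  # (4, 8)
```

In a multi-head setting, each head could use a different distance threshold so that some heads capture local syntactic neighborhoods while others model longer-range dependencies; that design choice is an assumption here, offered only to make the abstract's idea concrete.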

Published

2021-05-18

How to Cite

Ahmad, W. U., Peng, N., & Chang, K.-W. (2021). GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12462-12470. https://doi.org/10.1609/aaai.v35i14.17478

Section

AAAI Technical Track on Speech and Natural Language Processing I