Self-supervised Bilingual Syntactic Alignment for Neural Machine Translation
Keywords: Machine Translation & Multilinguality
Abstract
While various neural machine translation (NMT) methods have integrated monolingual syntactic knowledge into sequence-to-sequence linguistic representations, no prior work aligns the syntactic structures of the target language with the corresponding syntactic structures of the source language. This work presents SyntAligner, the first source-target bilingual syntactic alignment approach, built on self-supervised neural modeling with mutual information maximization. Building on word alignment for NMT, SyntAligner first aligns the syntactic structures of source and target sentences and then maximizes their mutual dependency by introducing a lower bound on their mutual information. In SyntAligner, syntactic structure is represented at span granularity by transforming source or target word hidden states into source or target syntactic span vectors. A border-sensitive span attention mechanism then captures the correlation between source and target syntactic span vectors, incorporating self-attention between span border words as an alignment bias. Finally, a self-supervised learning objective based on bilingual syntactic mutual information maximization dynamically samples aligned syntactic spans to maximize their mutual dependency. Experimental results on three typical NMT tasks: WMT'14 English-to-German, IWSLT'14 German-to-English, and NC'11 English-to-French, demonstrate the effectiveness and universality of SyntAligner's syntactic alignment.
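The abstract's core objective, maximizing a lower bound on the mutual information between aligned source and target syntactic span vectors, can be illustrated with an InfoNCE-style bound, where other spans in the batch act as negative samples. This is a minimal NumPy sketch under assumed details (cosine similarity, a `temperature` hyperparameter, and in-batch negatives are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def infonce_lower_bound(src_spans, tgt_spans, temperature=0.1):
    """Illustrative InfoNCE-style lower bound on the mutual information
    between aligned span vectors. Row i of src_spans and tgt_spans is an
    aligned source/target span pair; other rows serve as negatives.
    (Hypothetical sketch, not the paper's exact objective.)"""
    # L2-normalize so that dot products are cosine similarities.
    src = src_spans / np.linalg.norm(src_spans, axis=1, keepdims=True)
    tgt = tgt_spans / np.linalg.norm(tgt_spans, axis=1, keepdims=True)
    scores = src @ tgt.T / temperature  # (n, n) similarity matrix
    # Row-wise log-softmax; the aligned pair sits on the diagonal.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # Maximizing the mean diagonal log-probability tightens the bound
    # I(S; T) >= log n + E[log p(aligned | candidates)].
    return np.log(len(src_spans)) + np.mean(np.diag(log_probs))
```

Training would maximize this quantity (equivalently, minimize its negation as a loss), pulling aligned span vectors together while pushing sampled non-aligned spans apart.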
How to Cite
Zhang, T., Huang, H., Feng, C., & Cao, L. (2021). Self-supervised Bilingual Syntactic Alignment for Neural Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14454-14462. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17699
AAAI Technical Track on Speech and Natural Language Processing III