TY - JOUR
AU - Yang, Pengcheng
AU - Chen, Boxing
AU - Zhang, Pei
AU - Sun, Xu
PY - 2020/04/03
Y2 - 2024/03/28
TI - Visual Agreement Regularized Training for Multi-Modal Machine Translation
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 05
SE - AAAI Technical Track: Natural Language Processing
DO - 10.1609/aaai.v34i05.6484
UR - https://ojs.aaai.org/index.php/AAAI/article/view/6484
SP - 9418-9425
AB - Multi-modal machine translation aims at translating the source sentence into a different language in the presence of the paired image. Previous work suggests that additional visual information only provides dispensable help to translation, which is needed in several very special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g. “ball” in English and “ballon” in French). Besides, a simple yet effective multi-head co-attention model is also introduced to capture interactions between visual and textual features. The results show that our approaches can outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the proposed regularized training can effectively improve the agreement of attention on the image, leading to better use of visual information.
ER -