Towards Fully Automated Manga Translation
Keywords: Machine Translation & Multilinguality, Entertainment, Language and Vision
Abstract
We tackle the problem of machine translation of manga, i.e., Japanese comics. Manga translation involves two important problems in machine translation: context-aware and multimodal translation. Since text and images are mixed in an unstructured fashion in manga, obtaining context from the image is essential for manga translation. However, it remains an open problem how to extract context from images and integrate it into MT models. In addition, corpora and benchmarks to train and evaluate such models are currently unavailable. In this paper, we make the following four contributions that establish the foundation of manga translation research. First, we propose a multimodal context-aware translation framework. We are the first to incorporate context information obtained from manga images. This enables us to translate texts in speech bubbles that cannot be translated without context information (e.g., texts in other speech bubbles, the gender of speakers, etc.). Second, to train the model, we propose an approach to automatic corpus construction from pairs of original manga and their translations, by which a large parallel corpus can be constructed without any manual labeling. Third, we created a new benchmark to evaluate manga translation. Finally, on top of our proposed methods, we devised the first comprehensive system for fully automated manga translation.
How to Cite
Hinami, R., Ishiwatari, S., Yasuda, K., & Matsui, Y. (2021). Towards Fully Automated Manga Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12998-13008. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17537
AAAI Technical Track on Speech and Natural Language Processing I