Finding Sparse Structures for Domain Specific Neural Machine Translation
Keywords: Machine Translation & Multilinguality
Abstract
Neural machine translation often adopts the fine-tuning approach to adapt to specific domains. However, unrestricted fine-tuning can easily degrade performance on the general domain and over-fit to the target domain. To mitigate this issue, we propose Prune-Tune, a novel domain adaptation method via gradual pruning. It learns tiny domain-specific sub-networks during fine-tuning on new domains. Prune-Tune alleviates the over-fitting and degradation problems without modifying the model architecture. Furthermore, Prune-Tune can sequentially learn a single network with multiple disjoint domain-specific sub-networks for multiple domains. Empirical results show that Prune-Tune outperforms several strong competitors on the target-domain test set without sacrificing quality on the general domain, in both single- and multi-domain settings. The source code and data are available at https://github.com/ohlionel/Prune-Tune.
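The core idea of pruning-then-tuning described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): magnitude pruning marks the largest general-domain weights as frozen, and only the freed slots receive gradient updates during domain fine-tuning, so the general sub-network is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one weight matrix of a general-domain NMT model (hypothetical).
W = rng.normal(size=(8, 8))

# Prune step: keep the largest-magnitude weights as the frozen general
# sub-network; the remaining slots are free for the new domain.
keep_ratio = 0.9
threshold = np.quantile(np.abs(W), 1 - keep_ratio)
general_mask = np.abs(W) >= threshold   # True = frozen general weight
free_mask = ~general_mask               # True = trainable domain slot

W_pruned = W * general_mask             # zero out the free slots

# Tune step: apply a (simulated) gradient only to the free slots,
# so the general-domain weights stay untouched.
grad = rng.normal(size=W.shape)
lr = 0.1
W_tuned = W_pruned - lr * grad * free_mask

# The general sub-network is bit-for-bit unchanged after domain tuning.
assert np.allclose(W_tuned[general_mask], W[general_mask])
```

At inference time, general-domain inputs would use only the masked general weights, while target-domain inputs use the full tuned matrix; disjoint free-slot masks per domain allow several domains to coexist in one network.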
How to Cite
Liang, J., Zhao, C., Wang, M., Qiu, X., & Li, L. (2021). Finding Sparse Structures for Domain Specific Neural Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13333-13342. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17574
AAAI Technical Track on Speech and Natural Language Processing II