InstructDubber: Instruction-based Alignment for Zero-shot Movie Dubbing

Authors

  • Zhedong Zhang, Hangzhou Dianzi University, Hangzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  • Liang Li, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  • Gaoxiang Cong, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Chunshan Liu, Hangzhou Dianzi University, Hangzhou, China
  • Yuhan Gao, Hangzhou Dianzi University, Hangzhou, China
  • Xiaowan Wang, Tsinghua University, Beijing, China
  • Tao Gu, Macquarie University, Sydney, Australia
  • Yuankai Qi, Macquarie University, Sydney, Australia

DOI:

https://doi.org/10.1609/aaai.v40i15.38298

Abstract

Movie dubbing seeks to synthesize speech from a given script using a specific voice, while ensuring accurate lip synchronization and emotion-prosody alignment with the character’s visual performance. However, existing alignment approaches based on visual features face two key limitations: (1) they rely on complex, handcrafted visual preprocessing pipelines, including facial landmark detection and feature extraction; and (2) they generalize poorly to unseen visual domains, often resulting in degraded alignment and dubbing quality. To address these issues, we propose InstructDubber, a novel instruction-based alignment dubbing method for robust in-domain and zero-shot movie dubbing. Specifically, we first feed the video, script, and corresponding prompts into a multimodal large language model to generate natural language dubbing instructions regarding the speaking rate and emotional state depicted in the video, which are robust to visual domain variations. Second, we design an instructed duration distilling module to mine discriminative duration cues from the speaking rate instructions and predict lip-aligned phoneme-level pronunciation durations. Third, for emotion-prosody alignment, we devise an instructed emotion calibrating module, which fine-tunes an LLM-based instruction analyzer using ground truth dubbing emotion as supervision and predicts prosody based on the calibrated emotion analysis. Finally, the predicted duration and prosody, together with the script, are fed into the audio decoder to generate video-aligned dubbing. Extensive experiments on three major benchmarks demonstrate that InstructDubber outperforms state-of-the-art approaches across both in-domain and zero-shot scenarios.
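The abstract describes a four-stage pipeline: instruction generation, duration distilling, emotion calibration, and audio decoding. The following is a minimal Python sketch of that data flow only; every function, signature, and value below is an illustrative assumption, not the authors' implementation (the actual modules are learned neural networks).

```python
# Hypothetical sketch of the pipeline stages named in the abstract.
# All names and return values are illustrative stand-ins, not the paper's code.

def generate_instructions(video, script, prompt):
    # Stand-in for the multimodal LLM: returns natural-language dubbing
    # instructions about speaking rate and emotional state.
    return {"speaking_rate": "moderate, slowing at the final word",
            "emotion": "restrained sadness"}

def distill_durations(instructions, phonemes):
    # Stand-in for the instructed duration distilling module: maps the
    # speaking-rate instruction to per-phoneme durations (toy rule, in frames).
    base = 5 if "moderate" in instructions["speaking_rate"] else 3
    return [base for _ in phonemes]

def calibrate_emotion(instructions):
    # Stand-in for the instructed emotion calibrating module: turns the
    # calibrated emotion analysis into a prosody representation (toy vector).
    return [1.0 if "sad" in instructions["emotion"] else 0.0]

def decode_audio(script, durations, prosody):
    # Stand-in for the audio decoder that produces video-aligned dubbing.
    return {"script": script, "n_frames": sum(durations), "prosody": prosody}

phonemes = ["HH", "AH", "L", "OW"]  # ARPAbet phonemes for "hello"
instr = generate_instructions(video=None, script="hello", prompt="dub this clip")
durations = distill_durations(instr, phonemes)
prosody = calibrate_emotion(instr)
dub = decode_audio("hello", durations, prosody)
```

The sketch only fixes the interfaces between stages; in the paper each stand-in is a trained model, and the emotion calibrating module is additionally fine-tuned with ground-truth dubbing emotion as supervision.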

Published

2026-03-14

How to Cite

Zhang, Z., Li, L., Cong, G., Liu, C., Gao, Y., Wang, X., … Qi, Y. (2026). InstructDubber: Instruction-based Alignment for Zero-shot Movie Dubbing. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12988–12996. https://doi.org/10.1609/aaai.v40i15.38298

Section

AAAI Technical Track on Computer Vision XII