Modular-Cam: Modular Dynamic Camera-view Video Generation with LLM

Authors

  • Zirui Pan Department of Computer Science and Technology, Tsinghua University
  • Xin Wang Department of Computer Science and Technology, Tsinghua University Beijing National Research Center for Information Science and Technology, Tsinghua University
  • Yipeng Zhang Department of Computer Science and Technology, Tsinghua University
  • Hong Chen Department of Computer Science and Technology, Tsinghua University
  • Kwan Man Cheng Department of Computer Science and Technology, Tsinghua University
  • Yaofei Wu Beijing University of Technology
  • Wenwu Zhu Department of Computer Science and Technology, Tsinghua University Beijing National Research Center for Information Science and Technology, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v39i6.32681

Abstract

Text-to-Video generation, which uses a text prompt to generate high-quality videos, has recently drawn increasing attention and achieved great success thanks to the development of diffusion models. Existing methods mainly rely on a pre-trained text encoder to capture semantic information and perform cross-attention with the encoded text prompt to guide video generation. However, for complex prompts that contain dynamic scenes and multiple camera-view transformations, these methods cannot decompose the overall information into separate scenes and fail to transition smoothly between scenes according to the corresponding camera views. To address these problems, we propose a novel method, Modular-Cam. Specifically, to better understand a given complex prompt, we utilize a large language model to analyze user instructions and decouple them into multiple scenes together with transition actions. To generate a video containing dynamic scenes that match the given camera views, we incorporate the widely used temporal transformer into the diffusion model to ensure continuity within a single scene, and we propose CamOperator, a modular network that finely controls camera movements. Moreover, we propose AdaControlNet, which utilizes ControlNet to ensure consistency across scenes and adaptively adjusts the color tone of the generated video. Extensive qualitative and quantitative experiments demonstrate Modular-Cam's strong capability for generating multi-scene videos and its fine-grained control of camera movements. Generated results are available at https://modular-cam.github.io.

Published

2025-04-11

How to Cite

Pan, Z., Wang, X., Zhang, Y., Chen, H., Cheng, K. M., Wu, Y., & Zhu, W. (2025). Modular-Cam: Modular Dynamic Camera-view Video Generation with LLM. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6363-6371. https://doi.org/10.1609/aaai.v39i6.32681

Section

AAAI Technical Track on Computer Vision V