Decouple Content and Motion for Conditional Image-to-Video Generation

Authors

  • Cuifeng Shen, Peking University
  • Yulu Gan, Peking University
  • Chen Chen, Chinese Academy of Sciences
  • Xiongwei Zhu, Kuaishou Technology
  • Lele Cheng, Kuaishou Technology
  • Tingting Gao, Kuaishou Technology
  • Jinzhi Wang, Peking University

DOI

https://doi.org/10.1609/aaai.v38i5.28277

Keywords

CV: Computational Photography, Image & Video Synthesis, CV: Language and Vision

Abstract

The goal of conditional image-to-video (cI2V) generation is to create a plausible new video from a given condition, i.e., a single image and a text description. Previous cI2V methods typically operate directly in RGB pixel space, which limits their ability to model motion consistency and visual continuity; generating videos in pixel space is also inefficient. In this paper, we propose a novel approach that addresses these challenges by disentangling the target RGB pixels into two distinct components: spatial content and temporal motion. Specifically, we predict the temporal motion, comprising motion vectors and residuals, with a 3D-UNet diffusion model. By explicitly modeling temporal motion and using it to warp the starting image, we improve the temporal consistency of the generated videos while reducing spatial redundancy and emphasizing temporal detail. Our method achieves these performance gains by disentangling content and motion, without introducing new structural complexity to the model. Extensive experiments on multiple datasets confirm that our approach surpasses the majority of state-of-the-art methods in both effectiveness and efficiency.
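
The warping step in the abstract can be made concrete with a short sketch. Below is a minimal, hypothetical illustration (not the authors' released code) of how per-frame motion vectors and residuals, once predicted by a diffusion model, could be applied to the starting image via backward warping. It assumes dense motion vectors in pixel units and uses PyTorch's grid_sample; the function warp_with_motion and all tensor names are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def warp_with_motion(first_frame, motion_vectors, residuals):
        # Illustrative sketch, not the paper's implementation.
        # first_frame:    (B, C, H, W)    starting image (spatial content)
        # motion_vectors: (B, T, 2, H, W) dense backward flow per frame, in pixels
        # residuals:      (B, T, C, H, W) per-frame corrections added after warping
        # returns:        (B, T, C, H, W) reconstructed video frames
        B, T, _, H, W = motion_vectors.shape

        # Base sampling grid in pixel coordinates, stacked in (x, y) order.
        ys, xs = torch.meshgrid(
            torch.arange(H, dtype=first_frame.dtype, device=first_frame.device),
            torch.arange(W, dtype=first_frame.dtype, device=first_frame.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=0)  # (2, H, W)

        frames = []
        for t in range(T):
            # Backward warping: each output pixel samples the starting image
            # at its own location shifted by the predicted motion vector.
            coords = base + motion_vectors[:, t]          # (B, 2, H, W)
            grid_x = coords[:, 0] / (W - 1) * 2 - 1       # normalize to [-1, 1]
            grid_y = coords[:, 1] / (H - 1) * 2 - 1
            grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
            warped = F.grid_sample(first_frame, grid, align_corners=True)
            frames.append(warped + residuals[:, t])       # residual correction
        return torch.stack(frames, dim=1)

In this sketch, the diffusion model's output would supply motion_vectors and residuals, while the spatial content is carried entirely by first_frame, mirroring the content/motion decoupling described in the abstract.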

Published

2024-03-24

How to Cite

Shen, C., Gan, Y., Chen, C., Zhu, X., Cheng, L., Gao, T., & Wang, J. (2024). Decouple Content and Motion for Conditional Image-to-Video Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4757-4765. https://doi.org/10.1609/aaai.v38i5.28277

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV