Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model

Authors

  • Zhenyu Xie Sun Yat-sen University
  • Yang Wu Tencent AI Lab
  • Xuehao Gao Xi'an Jiaotong University
  • Zhongqian Sun Tencent AI Lab
  • Wei Yang Tencent AI Lab
  • Xiaodan Liang Sun Yat-sen University; DarkMatter AI Research

DOI:

https://doi.org/10.1609/aaai.v38i6.28443

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Multi-modal Vision

Abstract

Text-guided motion synthesis aims to generate 3D human motion that not only precisely reflects the textual description but also reveals motion details as much as possible. Pioneering methods explore diffusion models for text-to-motion synthesis and achieve significant improvements. However, these methods conduct the diffusion process either on the raw data distribution or in a low-dimensional latent space, and thus typically suffer from modality inconsistency or detail scarcity. To tackle this problem, we propose a novel Basic-to-Advanced Hierarchical Diffusion Model, named B2A-HDM, which collaboratively exploits low-dimensional and high-dimensional diffusion models for high-quality, detailed motion synthesis. Specifically, the basic diffusion model in the low-dimensional latent space provides an intermediate denoising result that is consistent with the textual description, while the advanced diffusion model in the high-dimensional latent space carries out the subsequent detail-enhancing denoising process. In addition, we introduce a multi-denoiser framework for the advanced diffusion model to ease the learning of the high-dimensional model and fully explore the generative potential of the diffusion model. Quantitative and qualitative results on two text-to-motion benchmarks (HumanML3D and KIT-ML) demonstrate that B2A-HDM outperforms existing state-of-the-art methods in terms of fidelity, modality consistency, and diversity.
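The basic-to-advanced handoff described in the abstract can be sketched schematically: a low-dimensional model denoises from the terminal step down to a switch step, the intermediate latent is mapped into the high-dimensional space, and multiple denoisers each handle a sub-interval of the remaining steps. Everything below is a toy illustration under stated assumptions: the linear projection, the shrinkage "denoisers", and the interval split are placeholders, not the authors' implementation.

```python
# Illustrative sketch of a basic-to-advanced hierarchical denoising pipeline.
# All names, dimensions, and the toy denoisers are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)

LOW_DIM, HIGH_DIM = 4, 16     # latent sizes (arbitrary toy values)
T, T_SWITCH = 50, 25          # total steps; step at which the advanced model takes over

# Toy linear map standing in for the transfer between latent spaces.
UP_PROJ = rng.standard_normal((LOW_DIM, HIGH_DIM)) / np.sqrt(LOW_DIM)

def toy_denoise_step(z, t):
    # Placeholder denoiser: shrink the latent slightly each step.
    return 0.95 * z

def basic_stage(z_T):
    """Low-dimensional diffusion from step T down to the switch step,
    producing a text-consistent intermediate result."""
    z = z_T
    for t in range(T, T_SWITCH, -1):
        z = toy_denoise_step(z, t)
    return z

def advanced_stage(z_high):
    """High-dimensional stage with a multi-denoiser split: each denoiser
    owns a sub-interval of the remaining timesteps (an assumption about
    how the multi-denoiser framework partitions the process)."""
    intervals = [(T_SWITCH, T_SWITCH // 2), (T_SWITCH // 2, 0)]
    z = z_high
    for hi, lo in intervals:
        for t in range(hi, lo, -1):
            z = toy_denoise_step(z, t)
    return z

z_T = rng.standard_normal(LOW_DIM)    # start from low-dimensional noise
z_mid = basic_stage(z_T)              # intermediate denoising result
z_high = z_mid @ UP_PROJ              # hand off to the high-dimensional space
z_0 = advanced_stage(z_high)          # detail-enhancing denoising
print(z_0.shape)                      # (16,)
```

The point of the split is that the low-dimensional model only has to get the coarse, text-aligned structure right, while the high-dimensional denoisers refine details over shorter, easier-to-learn sub-intervals.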

Published

2024-03-24

How to Cite

Xie, Z., Wu, Y., Gao, X., Sun, Z., Yang, W., & Liang, X. (2024). Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6252-6260. https://doi.org/10.1609/aaai.v38i6.28443

Section

AAAI Technical Track on Computer Vision V