Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control

Authors

  • Zunnan Xu, Tsinghua University
  • Yachao Zhang, Tsinghua University
  • Sicheng Yang, Tsinghua University
  • Ronghui Li, Tsinghua University
  • Xiu Li, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i6.28458

Keywords:

CV: Biometrics, Face, Gesture & Pose, CV: Multi-modal Vision, HAI: Human-Computer Interaction

Abstract

This study aims to improve the generation of 3D gestures by utilizing multimodal information from human speech. Previous studies have focused on incorporating additional modalities to enhance the quality of generated gestures. However, these methods perform poorly when certain modalities are missing during inference. To address this problem, we propose using speech-derived multimodal priors to improve gesture generation. We introduce a novel method that separates priors from speech and employs these multimodal priors as constraints for generating gestures. Our approach uses a chain-like modeling scheme to generate facial blendshapes, body movements, and hand gestures sequentially. Specifically, we incorporate rhythm cues derived from facial deformation and a stylization prior based on speech emotion into the gesture generation process. By incorporating multimodal priors, our method improves the quality of generated gestures and eliminates the need for expensive setup preparation during inference. Extensive experiments and user studies confirm that our proposed approach achieves state-of-the-art performance.
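The abstract describes a chain-like cascade in which each modality (face, body, hands) is generated in turn, conditioned on speech, an emotion-derived style prior, and the outputs of the earlier stages. The following is a minimal sketch of how such a cascade could be wired up; the module names, feature dimensions, and the use of GRU backbones are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch of a cascaded (chain-like) conditional gesture pipeline.
# Module names, feature sizes, and the simple GRU stages are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class StageGenerator(nn.Module):
    """One link in the chain: maps speech features plus the priors
    produced by earlier stages to the current modality's motion."""

    def __init__(self, in_dim: int, out_dim: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)   # (B, T, hidden)
        return self.head(h)  # (B, T, out_dim)


class ChainOfGeneration(nn.Module):
    """Face -> body -> hands, each stage conditioned on speech features,
    a speech-emotion style prior, and the outputs of previous stages."""

    def __init__(self, speech_dim=128, style_dim=16,
                 face_dim=52, body_dim=54, hand_dim=90):
        super().__init__()
        self.face = StageGenerator(speech_dim + style_dim, face_dim)
        self.body = StageGenerator(speech_dim + style_dim + face_dim, body_dim)
        self.hands = StageGenerator(
            speech_dim + style_dim + face_dim + body_dim, hand_dim)

    def forward(self, speech: torch.Tensor, style: torch.Tensor):
        # speech: (B, T, speech_dim); style: (B, style_dim) from speech emotion
        style_seq = style.unsqueeze(1).expand(-1, speech.size(1), -1)
        cond = torch.cat([speech, style_seq], dim=-1)

        face = self.face(cond)                                     # blendshapes
        body = self.body(torch.cat([cond, face], dim=-1))          # body motion
        hands = self.hands(torch.cat([cond, face, body], dim=-1))  # hand motion
        return face, body, hands


if __name__ == "__main__":
    model = ChainOfGeneration()
    speech = torch.randn(2, 120, 128)  # e.g. 120 frames of speech features
    style = torch.randn(2, 16)         # emotion-derived style prior
    face, body, hands = model(speech, style)
    print(face.shape, body.shape, hands.shape)
```

Because every prior in the chain is derived from speech alone, all three outputs can still be produced at inference time when only audio is available, which is the motivation stated in the abstract.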

Published

2024-03-24

How to Cite

Xu, Z., Zhang, Y., Yang, S., Li, R., & Li, X. (2024). Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6387-6395. https://doi.org/10.1609/aaai.v38i6.28458

Section

AAAI Technical Track on Computer Vision V