Scalable Motion Style Transfer with Constrained Diffusion Generation
DOI:
https://doi.org/10.1609/aaai.v38i9.28889
Keywords:
HAI: Applications, ML: Transfer, Domain Adaptation, Multi-Task Learning
Abstract
Current training of motion style transfer systems relies on consistency losses across style domains to preserve content, which hinders scaling to a large number of domains and to private data. Recent image style transfer works show the potential of training independently on each domain by leveraging implicit bridging between diffusion models; their content preservation, however, is limited to simple data patterns. We address this by imposing biased sampling in the backward diffusion process while keeping training independent across domains. We construct the bias from source-domain keyframes and apply it as the gradient of a content constraint, yielding a framework with keyframe manifold constraint gradients (KMCGs). Our validation demonstrates the success of training separate models to transfer between as many as ten dance motion styles. Comprehensive experiments find a significant improvement in preserving motion content compared with baseline and ablated diffusion-based style transfer models. In addition, we perform a human study for a subjective assessment of the quality of the generated dance motions. The results validate the competitiveness of KMCGs.
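The abstract describes biasing backward diffusion sampling with the gradient of a keyframe content constraint. The sketch below illustrates one plausible form of such a guided reverse step in PyTorch, in the spirit of manifold constraint gradients: a standard DDPM update shifted along the gradient of a squared error between the Tweedie estimate of the clean motion and the source-domain keyframes. The function name, the squared-error constraint, and the `key_idx` and `guidance_scale` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def kmcg_reverse_step(eps_model, x_t, t, betas, alpha_bars,
                      keyframes, key_idx, guidance_scale=1.0):
    """One reverse-diffusion step biased by a keyframe content constraint.

    Hypothetical illustration of KMCG-style guidance: the unconditional
    DDPM update is computed as usual, then shifted along the gradient of
    a content constraint anchored at source-domain keyframes.
    """
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)                      # predicted noise
    beta_t, a_bar_t = betas[t], alpha_bars[t]
    alpha_t = 1.0 - beta_t
    # Tweedie estimate of the clean motion x0 from the noisy sample x_t
    x0_hat = (x_t - (1.0 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
    # Content constraint: match the source keyframes at their frame indices
    constraint = ((x0_hat[:, key_idx] - keyframes) ** 2).sum()
    grad = torch.autograd.grad(constraint, x_t)[0]
    # Standard DDPM posterior mean, plus noise except at the final step
    mean = (x_t - beta_t / (1.0 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    x_prev = mean + beta_t.sqrt() * noise
    # Bias the sample toward the keyframe-consistent manifold
    return (x_prev - guidance_scale * grad).detach()

if __name__ == "__main__":
    # Toy usage: 50 steps, 60-frame motion, 63-dim poses, stand-in denoiser.
    T, frames, dims = 50, 60, 63
    betas = torch.linspace(1e-4, 2e-2, T)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    eps_model = lambda x, t: torch.zeros_like(x)    # placeholder model
    key_idx = [0, 29, 59]                           # keyframe positions
    keyframes = torch.zeros(1, len(key_idx), dims)  # source-domain keyframes
    x = torch.randn(1, frames, dims)
    for t in reversed(range(T)):
        x = kmcg_reverse_step(eps_model, x, t, betas, alpha_bars,
                              keyframes, key_idx, guidance_scale=0.1)
```

Note that under this reading the guidance touches only the sampling loop; the denoiser for each style domain can still be trained independently, which is the scalability property the abstract emphasizes.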
Published
2024-03-24
How to Cite
Yin, W., Yu, Y., Yin, H., Kragic, D., & Björkman, M. (2024). Scalable Motion Style Transfer with Constrained Diffusion Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(9), 10234-10242. https://doi.org/10.1609/aaai.v38i9.28889
Issue
Vol. 38 No. 9 (2024)
Section
AAAI Technical Track on Humans and AI