MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels

Authors

  • Taeryung Lee, Seoul National University
  • Gyeongsik Moon, Meta Reality Labs Research
  • Kyoung Mu Lee, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v37i1.25206

Keywords:

CV: Biometrics, Face, Gesture & Pose, CV: Motion & Tracking

Abstract

We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main lines of previous work, action-conditioned and motion-conditioned methods, have limitations in solving this problem. Action-conditioned methods generate a motion sequence from a single action label; hence, they cannot generate long-term motions composed of multiple actions and the transitions between them. Meanwhile, motion-conditioned methods generate future motion from an initial motion; the generated future motion depends only on the past, so it cannot be controlled by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct takes both action and motion conditions into account with a unified recurrent generation system. At each step, it takes the previously generated motion and the next action label, then generates a smooth transition followed by the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of multiple action labels. The code is publicly available at https://github.com/TaeryungLee/MultiAct_RELEASE.
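The recurrent generation scheme described in the abstract can be summarized as a simple control loop over the user's action labels. The sketch below is a hypothetical illustration of that loop, not the released implementation: generate_long_term_motion, step_fn, and the tensor shapes are assumptions, with step_fn standing in for MultiAct's transition-and-motion generator.

```python
# Minimal sketch of MultiAct-style recurrent generation (interface names are
# assumptions, not the released API). Each step conditions on the previously
# generated motion and the next action label, producing a transition plus the
# new action's motion, which is appended to the running sequence.
from typing import Callable, List, Sequence
import numpy as np

def generate_long_term_motion(
    step_fn: Callable[[np.ndarray, int], np.ndarray],  # (prev_motion, action) -> transition + action motion
    initial_motion: np.ndarray,                         # (T0, D) seed pose sequence
    action_labels: Sequence[int],                       # user-specified action labels, in order
) -> np.ndarray:
    """Concatenate per-step generations into one long-term motion."""
    motion = initial_motion
    outputs: List[np.ndarray] = [initial_motion]
    for action in action_labels:
        # One recurrent step: generate a smooth transition followed by the
        # motion of the requested action, conditioned on the previous motion.
        new_segment = step_fn(motion, action)           # (T_step, D)
        outputs.append(new_segment)
        motion = new_segment                            # feed back as the next condition
    return np.concatenate(outputs, axis=0)

# Toy usage with a placeholder generator (random poses) just to show the call pattern.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_step = lambda prev, action: rng.standard_normal((30, prev.shape[1]))
    seed = np.zeros((10, 72))                           # e.g. a 72-D pose vector per frame
    long_motion = generate_long_term_motion(dummy_step, seed, action_labels=[3, 7, 1])
    print(long_motion.shape)                            # (100, 72): 10 seed frames + 3 x 30 generated frames
```

The key point the loop illustrates is that, unlike purely motion-conditioned prediction, each generated segment is steered by an explicit action label while remaining continuous with the previously generated motion.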

Published

2023-06-26

How to Cite

Lee, T., Moon, G., & Lee, K. M. (2023). MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1231-1239. https://doi.org/10.1609/aaai.v37i1.25206

Section

AAAI Technical Track on Computer Vision I