Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control

Authors

  • Yu-Wei Chao, NVIDIA
  • Jimei Yang, Adobe
  • Weifeng Chen, University of Michigan, Ann Arbor
  • Jia Deng, Princeton University

DOI:

https://doi.org/10.1609/aaai.v35i7.16736

Keywords:

Game Design -- Virtual Humans, NPCs and Autonomous Characters

Abstract

Recent progress in physics-based character animation has achieved impressive breakthroughs in human motion synthesis by imitating motion capture data via deep reinforcement learning. However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks: sitting onto a chair. We propose a hierarchical reinforcement learning framework that relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input. A supplementary video can be found at https://youtu.be/3CeN0OGz2cA.
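
The sketch below illustrates the hierarchical control idea described in the abstract: a meta controller periodically chooses among low-level subtask controllers, each imitating a reusable mocap motion, to drive the simulated humanoid toward the chair. This is not the authors' implementation; the subtask names, state layout, environment interface, and switching interval are illustrative assumptions.

```python
# Minimal sketch of a hierarchical control loop for the sitting task.
# All names and dimensions are illustrative assumptions, not the paper's code.
import numpy as np

class SubtaskController:
    """Low-level policy imitating one reusable mocap clip (e.g. walk, turn, sit)."""
    def __init__(self, name, action_dim=21):
        self.name = name
        self.action_dim = action_dim

    def act(self, humanoid_state):
        # Stand-in for a trained imitation policy; returns joint-level targets.
        return np.zeros(self.action_dim)

class MetaController:
    """High-level policy that picks which subtask to execute from the task state
    (humanoid pose plus the chair's relative position and orientation)."""
    def __init__(self, subtasks):
        self.subtasks = subtasks

    def select(self, task_state):
        # Stand-in for a trained switching policy; trivially picks the first subtask.
        return self.subtasks[0]

def run_episode(env, meta, horizon=600, switch_every=30):
    """The meta controller re-selects a subtask every `switch_every` steps;
    the active subtask controller drives the simulated humanoid."""
    task_state, humanoid_state = env.reset()
    active = meta.select(task_state)
    for t in range(horizon):
        if t % switch_every == 0:
            active = meta.select(task_state)
        action = active.act(humanoid_state)
        task_state, humanoid_state, done = env.step(action)
        if done:
            break

class DummySitEnv:
    """Toy placeholder for a physics simulation of the humanoid and chair."""
    def reset(self):
        return np.zeros(6), np.zeros(52)

    def step(self, action):
        return np.zeros(6), np.zeros(52), False

subtasks = [SubtaskController(n) for n in ("walk", "left_turn", "right_turn", "sit")]
run_episode(DummySitEnv(), MetaController(subtasks))
```

The key design choice the sketch highlights is the temporal abstraction: the meta controller acts on a coarser timescale than the subtask controllers, which lets each subtask policy be trained once on a simple mocap clip and reused across varying human-chair configurations.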

Published

2021-05-18

How to Cite

Chao, Y.-W., Yang, J., Chen, W., & Deng, J. (2021). Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 5887-5895. https://doi.org/10.1609/aaai.v35i7.16736

Issue

Vol. 35 No. 7 (2021)

Section

AAAI Technical Track on Humans and AI