Learning Object-Centric Motion Priors from Human for Robotic Dexterous Manipulation

Authors

  • Zhengdong Hong, Zhejiang University
  • Guofeng Zhang, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v40i22.38892

Abstract

Manipulating diverse objects with multi-fingered dexterous hands is challenging due to the high dimensionality and complex dynamics involved. Human-Object Interaction (HOI) datasets provide rich knowledge about tasks and embodied interactions. Rather than merely imitating human demonstrations, our method leverages these datasets to holistically predict future hand-object states. The predicted future object states can serve as a general-purpose reward term for reinforcement learning, reducing reliance on task-specific reward engineering and improving generalization across tasks. We conduct extensive experiments on three manipulation tasks in simulation and the real world. Our approach outperforms existing state-of-the-art (SOTA) methods in both success rate and generalizability to novel objects. Furthermore, we validate the cross-embodiment compatibility of our method by successfully deploying the learned skills on different robot hands.
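The abstract's key idea, using predicted future object states as a general-purpose RL reward, can be illustrated with a minimal sketch. The function below is a hypothetical reward (not the paper's actual formulation): it scores how closely the achieved object pose tracks the pose forecast by a motion prior, with assumed 7-D pose vectors (position plus unit quaternion) and made-up weights `w_pos` and `w_rot`.

```python
import numpy as np

def object_state_reward(achieved_pose, predicted_pose, w_pos=1.0, w_rot=0.1):
    """Hypothetical object-centric tracking reward.

    achieved_pose, predicted_pose: (7,) arrays [x, y, z, qw, qx, qy, qz],
    where the quaternion part is assumed to be unit-norm.
    Returns a scalar that is maximal (zero) when the achieved object pose
    matches the pose predicted by the motion prior.
    """
    # Euclidean position error between achieved and predicted object centers.
    pos_err = np.linalg.norm(achieved_pose[:3] - predicted_pose[:3])
    # Quaternion distance: 1 - |<q1, q2>| is 0 for identical orientations
    # and insensitive to the q vs. -q double cover.
    rot_err = 1.0 - abs(np.dot(achieved_pose[3:], predicted_pose[3:]))
    return -(w_pos * pos_err + w_rot * rot_err)
```

Because such a reward depends only on the object's state, not on the hand that moves it, it is one plausible way a single prior could transfer across tasks and robot embodiments, as the abstract claims.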

Published

2026-03-14

How to Cite

Hong, Z., & Zhang, G. (2026). Learning Object-Centric Motion Priors from Human for Robotic Dexterous Manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18288–18296. https://doi.org/10.1609/aaai.v40i22.38892

Section

AAAI Technical Track on Intelligent Robotics