Towards Affordance-Aware Robotic Dexterous Grasping with Human-like Priors

Authors

  • Haoyu Zhao (Wuhan University; DAMO Academy, Alibaba Group)
  • Linghao Zhuang (Wuhan University)
  • Xingyue Zhao (DAMO Academy, Alibaba Group)
  • Cheng Zeng (Tsinghua University)
  • Haoran Xu (Zhejiang University)
  • Yuming Jiang (DAMO Academy, Alibaba Group)
  • Jun Cen (DAMO Academy, Alibaba Group; Hupan Lab; Zhejiang University)
  • Kexiang Wang (DAMO Academy, Alibaba Group)
  • Jiayan Guo (DAMO Academy, Alibaba Group)
  • Siteng Huang (DAMO Academy, Alibaba Group; Hupan Lab; Zhejiang University)
  • Xin Li (DAMO Academy, Alibaba Group; Hupan Lab)
  • Deli Zhao (DAMO Academy, Alibaba Group; Hupan Lab)
  • Hua Zou (Wuhan University)

DOI:

https://doi.org/10.1609/aaai.v40i15.38313

Abstract

A dexterous hand capable of generalizable object grasping is fundamental to the development of general-purpose embodied AI. However, previous methods focus narrowly on low-level grasp stability metrics, neglecting the affordance-aware positioning and human-like poses that are crucial for downstream manipulation. To address these limitations, we propose AffordDex, a novel two-stage training framework that learns a universal grasping policy with an inherent understanding of both motion priors and object affordances. In the first stage, a trajectory imitator is pre-trained on a large corpus of human hand motions to instill a strong prior for natural movement. In the second stage, a residual module is trained to adapt these general human-like motions to specific object instances. This refinement is critically guided by two components: our Negative Affordance-aware Segmentation (NAA) module, which identifies functionally inappropriate contact regions, and a privileged teacher-student distillation process that ensures the final vision-based policy achieves high success rates. Extensive experiments demonstrate that AffordDex not only achieves universal dexterous grasping but also remains remarkably human-like in posture and functionally appropriate in contact location. As a result, AffordDex significantly outperforms state-of-the-art baselines on seen objects, unseen instances, and even entirely novel categories.
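The pipeline sketched in the abstract can be pictured as a frozen human-motion prior composed with a learned per-object residual correction, trained under an affordance-aware penalty. The PyTorch sketch below is purely illustrative: the class names, network sizes, and the affordance_penalty helper are assumptions made for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BasePolicy(nn.Module):
    """Stage 1 (illustrative): trajectory imitator pre-trained on human hand motions."""
    def __init__(self, obs_dim=128, act_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class ResidualPolicy(nn.Module):
    """Stage 2 (illustrative): residual module adapting the human-like motion
    to a specific object instance."""
    def __init__(self, obs_dim=128, act_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs, base_action):
        return self.net(torch.cat([obs, base_action], dim=-1))

def grasp_action(obs, base, residual):
    """Final action = frozen human-motion prior + learned residual correction."""
    with torch.no_grad():
        a_base = base(obs)  # stage-1 prior kept fixed during stage-2 training
    return a_base + residual(obs, a_base)

def affordance_penalty(contact_points, negative_mask):
    """Hypothetical reward term: penalize contacts falling inside the
    negative-affordance regions predicted by an NAA-style module.
    contact_points: (B, K) contact indicators over K object surface regions
    negative_mask:  (B, K) 1 where contact is functionally inappropriate"""
    return -(contact_points * negative_mask).sum(dim=-1)
```

A training loop would optimize only the residual policy with this penalty added to the task reward, before distilling the resulting privileged teacher into a vision-based student; the exact reward shaping and distillation losses are not specified here.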

Published

2026-03-14

How to Cite

Zhao, H., Zhuang, L., Zhao, X., Zeng, C., Xu, H., Jiang, Y., … Zou, H. (2026). Towards Affordance-Aware Robotic Dexterous Grasping with Human-like Priors. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 13126–13134. https://doi.org/10.1609/aaai.v40i15.38313

Section

AAAI Technical Track on Computer Vision XII