DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping

Authors

  • Yifan Zhong, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Xuchuan Huang, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Ruochong Li, PKU-PsiBot Joint Lab; Hong Kong University of Science and Technology (Guangzhou)
  • Ceyao Zhang, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Zhang Chen, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Tianrui Guan, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Fanlian Zeng, PKU-PsiBot Joint Lab; University of Pennsylvania
  • Ka Nam Lui, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Yuyao Ye, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Yitao Liang, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Yaodong Yang, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab
  • Yuanpei Chen, Institute for Artificial Intelligence, Peking University; PKU-PsiBot Joint Lab

DOI:

https://doi.org/10.1609/aaai.v40i22.38953

Abstract

Dexterous grasping remains a fundamental yet challenging problem in robotics. A general-purpose robot must be capable of grasping diverse objects in arbitrary scenarios. However, existing research typically relies on restrictive assumptions, such as single-object settings or limited environments, and consequently exhibits constrained generalization. We present DexGraspVLA, a hierarchical framework for robust generalization in language-guided general dexterous grasping and beyond. It utilizes a pre-trained Vision-Language model as the high-level planner and learns a diffusion-based low-level Action controller. The key insight for achieving generalization is to iteratively transform diverse language and visual inputs into domain-invariant representations via foundation models, over which imitation learning can be applied effectively because domain shift is alleviated. Notably, our method achieves a 90+% dexterous grasping success rate across thousands of challenging, unseen cluttered scenes. Empirical analysis confirms that internal model behavior remains consistent across environmental variations, validating our design. DexGraspVLA also, for the first time, simultaneously demonstrates free-form long-horizon prompt execution, robustness to adversarial objects and human disturbance, and failure recovery. Extended application to non-prehensile grasping further demonstrates its generality.
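
The abstract describes a two-level design: a pre-trained vision-language model plans at the high level, foundation-model features provide approximately domain-invariant inputs, and a diffusion-based controller trained with imitation learning produces low-level actions. The Python sketch below is only an illustrative, hypothetical rendering of that control flow; every class, method, and dimension here (Observation, VisionLanguagePlanner, DiffusionController, the assumed 13-DoF action chunk) is a placeholder for illustration, not the paper's actual interface or implementation.

    # Hypothetical sketch of a hierarchical planner/controller grasping loop.
    # Names and shapes are assumptions, not DexGraspVLA's real API.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Observation:
        head_image: np.ndarray      # third-person RGB view of the scene
        wrist_image: np.ndarray     # egocentric RGB view from the hand
        proprioception: np.ndarray  # arm and dexterous-hand joint state

    class VisionLanguagePlanner:
        """Stand-in for a pre-trained vision-language model used as the high-level planner."""
        def plan(self, instruction: str, obs: Observation) -> np.ndarray:
            # A real planner would ground the instruction in the image and
            # localize the target object; here we return a dummy box.
            return np.array([0.4, 0.3, 0.6, 0.5])  # normalized (x1, y1, x2, y2)

    class FoundationEncoder:
        """Stand-in for frozen foundation-model encoders producing domain-invariant features."""
        def encode(self, obs: Observation, target_box: np.ndarray) -> np.ndarray:
            return np.random.randn(512)  # placeholder feature vector

    class DiffusionController:
        """Stand-in for a diffusion-based low-level controller trained by imitation learning."""
        def sample_actions(self, features: np.ndarray, horizon: int = 8) -> np.ndarray:
            # A real diffusion policy would iteratively denoise an action chunk;
            # here we return a zero chunk of an assumed 13-DoF action space.
            return np.zeros((horizon, 13))

    def grasp_loop(instruction: str, get_obs, send_actions, max_steps: int = 50) -> None:
        """One episode: plan once at the top level, then repeatedly encode and act."""
        planner, encoder, controller = VisionLanguagePlanner(), FoundationEncoder(), DiffusionController()
        obs = get_obs()
        target_box = planner.plan(instruction, obs)
        for _ in range(max_steps):
            features = encoder.encode(obs, target_box)
            actions = controller.sample_actions(features)
            send_actions(actions)
            obs = get_obs()

    if __name__ == "__main__":
        dummy = Observation(np.zeros((480, 640, 3)), np.zeros((480, 640, 3)), np.zeros(13))
        grasp_loop("grasp the red mug", get_obs=lambda: dummy, send_actions=lambda a: None, max_steps=3)

The split mirrors the paper's stated insight: because the controller only ever sees features from frozen foundation models rather than raw, environment-specific pixels, imitation learning on modest demonstration data can transfer across scenes with reduced domain shift.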

Published

2026-03-14

How to Cite

Zhong, Y., Huang, X., Li, R., Zhang, C., Chen, Z., Guan, T., Zeng, F., Lui, K. N., Ye, Y., Liang, Y., Yang, Y., & Chen, Y. (2026). DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18836-18844. https://doi.org/10.1609/aaai.v40i22.38953

Issue

Vol. 40 No. 22 (2026)

Section

AAAI Technical Track on Intelligent Robotics