Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition


  • Benjia Zhou, Macau University of Science and Technology
  • Yunan Li, Xidian University; Xi'an Key Laboratory of Big Data and Intelligent Vision, China
  • Jun Wan, NLPR, CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences, China


Biometrics, Face, Gesture & Pose


Human gesture recognition has drawn much attention in computer vision. However, recognition performance is often degraded by gesture-irrelevant factors such as the background and the performers' clothing; focusing on the hand/arm regions is therefore important for gesture recognition. Meanwhile, an architecture-searched network can outperform block-fixed designs such as ResNet, since it better diversifies the features extracted at different stages of the network. In this paper, we propose a Regional Attention with Architecture-Rebuilt 3D Network (RAAR3DNet) for gesture recognition. Because features in the early, middle, and late stages of the network differ in shape and representational ability, we replace the fixed Inception modules with structures rebuilt automatically via Neural Architecture Search (NAS), enabling the network to capture different levels of feature representation at different layers more adaptively. We also design a stackable regional attention module, Dynamic-Static Attention (DSA), which derives a Gaussian guidance heatmap and a dynamic motion map to highlight the hand/arm regions and the motion information in the spatial and temporal domains, respectively. Extensive experiments on two recent large-scale RGB-D gesture datasets validate the effectiveness of the proposed method and show that it outperforms state-of-the-art methods. The code for our method is available at:
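The two attention cues described above can be sketched in plain NumPy: a static Gaussian heatmap centered on hand/arm locations, and a dynamic motion map from frame differences, blended and used to reweight a feature map. All function names, the blending weight `alpha`, and the residual `1 + attention` form here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gaussian_heatmap(h, w, centers, sigma=8.0):
    """Static cue: Gaussian bumps at (hypothetical) hand/arm keypoints."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for cy, cx in centers:
        heat = np.maximum(
            heat, np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        )
    return heat

def motion_map(frames):
    """Dynamic cue: accumulated absolute frame differences, scaled to [0, 1]."""
    diff = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    return diff / (diff.max() + 1e-8)

def dsa_reweight(features, static, dynamic, alpha=0.5):
    """Blend the two cues and modulate features in residual form (1 + attention)."""
    attn = alpha * static + (1 - alpha) * dynamic
    return features * (1.0 + attn[None, :, :])

# Toy usage: a 4-frame 32x32 clip and an 8-channel feature map.
frames = np.random.rand(4, 32, 32)
heat = gaussian_heatmap(32, 32, centers=[(16, 16)])
motion = motion_map(frames)
features = np.ones((8, 32, 32))
out = dsa_reweight(features, heat, motion)
```

Because the attention is non-negative and applied residually, regions without hand activity keep their original features rather than being zeroed out.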




How to Cite

Zhou, B., Li, Y., & Wan, J. (2021). Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3563-3571. Retrieved from



AAAI Technical Track on Computer Vision III