Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering
DOI: https://doi.org/10.1609/aaai.v38i3.27946
Keywords: CV: 3D Computer Vision
Abstract
Reconstructing high-fidelity hand models with intricate textures plays a crucial role in enhancing human-object interaction and advancing real-world applications. Although state-of-the-art methods excel at texture generation and image rendering, they often struggle to accurately capture geometric details. Learning-based approaches usually offer better robustness and faster inference, but they tend to produce over-smooth results and require substantial amounts of training data. To address these issues, we present a novel fine-grained multi-view hand mesh reconstruction method that leverages inverse rendering to restore hand poses and intricate details. First, our approach predicts a parametric hand mesh model from multi-view images using a Graph Convolutional Network (GCN)-based method. We then introduce a novel Hand Albedo and Mesh (HAM) optimization module that refines both the hand mesh and its textures while preserving the mesh topology. In addition, we propose an effective mesh-based neural rendering scheme that simultaneously generates photo-realistic images and optimizes the mesh geometry by fusing a pre-trained rendering network with vertex features. We conduct comprehensive experiments on InterHand2.6M, DeepHandMesh, and a dataset collected by ourselves; the results show that our approach outperforms state-of-the-art methods in both reconstruction accuracy and rendering quality. Code and dataset are publicly available at https://github.com/agnJason/FMHR.
Published
2024-03-24
How to Cite
Gan, Q., Li, W., Ren, J., & Zhu, J. (2024). Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1779-1787. https://doi.org/10.1609/aaai.v38i3.27946
Section
AAAI Technical Track on Computer Vision II