Relightable and Animatable Neural Avatars from Videos

Authors

  • Wenbin Lin, School of Software and BNRist, Tsinghua University
  • Chengwei Zheng, School of Software and BNRist, Tsinghua University
  • Jun-Hai Yong, School of Software and BNRist, Tsinghua University
  • Feng Xu, School of Software and BNRist, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i4.28136

Keywords:

CV: 3D Computer Vision, CV: Biometrics, Face, Gesture & Pose

Abstract

Lightweight creation of 3D digital avatars is a highly desirable but challenging task. From only sparse videos of a person captured under unknown illumination, we propose a method to create relightable and animatable neural avatars, which can be used to synthesize photorealistic images of humans under novel viewpoints, body poses, and lighting. The key challenge is to disentangle the geometry and material of the clothed body from the lighting, which becomes even harder due to the complex geometry and shadow changes caused by body motions. To solve this ill-posed problem, we propose novel techniques to better model the geometry and shadow changes. For geometry change modeling, we propose an invertible deformation field, which helps to solve the inverse skinning problem and leads to better geometry quality. To model the spatially and temporally varying shading cues, we propose a pose-aware part-wise light visibility network to estimate light occlusion. Extensive experiments on synthetic and real datasets show that our approach reconstructs high-quality geometry and generates realistic shadows under different body poses. Code and data are available at https://wenbin-lin.github.io/RelightableAvatar-page.
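To illustrate the "invertible deformation field" idea mentioned in the abstract, below is a minimal PyTorch sketch of one generic way to build a pose-conditioned deformation whose inverse is exact, so observation-space points can be mapped back to the canonical space (the inverse-skinning direction) without iterative root finding. This is not the authors' implementation: the use of additive coupling layers, the network sizes, and the 72-D pose conditioning are illustrative assumptions.

```python
# Sketch only: a coupling-based invertible deformation field (assumed design,
# not the paper's architecture). Requires PyTorch.
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """Shift one subset of coordinates by an MLP of the remaining coordinates
    and a condition vector; the mapping is exactly invertible (subtract the
    same shift to invert)."""

    def __init__(self, keep_dims, shift_dims, cond_dim, hidden=64):
        super().__init__()
        self.keep_dims, self.shift_dims = keep_dims, shift_dims
        self.mlp = nn.Sequential(
            nn.Linear(len(keep_dims) + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(shift_dims)),
        )

    def forward(self, x, cond):
        shift = self.mlp(torch.cat([x[:, self.keep_dims], cond], dim=-1))
        y = x.clone()
        y[:, self.shift_dims] = x[:, self.shift_dims] + shift
        return y

    def inverse(self, y, cond):
        # The kept coordinates are unchanged, so the same shift can be recomputed.
        shift = self.mlp(torch.cat([y[:, self.keep_dims], cond], dim=-1))
        x = y.clone()
        x[:, self.shift_dims] = y[:, self.shift_dims] - shift
        return x


class InvertibleDeformation(nn.Module):
    """Pose-conditioned non-rigid deformation of 3D points with an exact inverse."""

    def __init__(self, pose_dim=72, n_layers=4):
        super().__init__()
        splits = [([0], [1, 2]), ([1, 2], [0])]  # alternate which coordinates move
        self.layers = nn.ModuleList(
            AdditiveCoupling(*splits[i % 2], cond_dim=pose_dim)
            for i in range(n_layers)
        )

    def forward(self, x_canonical, pose):
        for layer in self.layers:
            x_canonical = layer(x_canonical, pose)
        return x_canonical  # points deformed into the posed space

    def inverse(self, x_deformed, pose):
        for layer in reversed(self.layers):
            x_deformed = layer.inverse(x_deformed, pose)
        return x_deformed  # points mapped back to the canonical space
```

Because each coupling layer leaves one coordinate subset unchanged, the inverse is closed form rather than approximate, which is the property that makes such a deformation convenient for mapping observed points back to a canonical, pose-independent space.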

Published

2024-03-24

How to Cite

Lin, W., Zheng, C., Yong, J.-H., & Xu, F. (2024). Relightable and Animatable Neural Avatars from Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3486-3494. https://doi.org/10.1609/aaai.v38i4.28136

Section

AAAI Technical Track on Computer Vision III