Occupancy Planes for Single-View RGB-D Human Reconstruction

Authors

  • Xiaoming Zhao, University of Illinois at Urbana-Champaign
  • Yuan-Ting Hu, University of Illinois at Urbana-Champaign
  • Zhongzheng Ren, University of Illinois at Urbana-Champaign
  • Alexander G. Schwing, University of Illinois at Urbana-Champaign

DOI:

https://doi.org/10.1609/aaai.v37i3.25474

Keywords:

CV: 3D Computer Vision

Abstract

Single-view RGB-D human reconstruction with implicit functions is often formulated as per-point classification. Specifically, 3D locations within the camera's view frustum are first projected independently onto the image, and a corresponding feature is extracted for each location. The feature of each 3D location is then used to independently classify whether the corresponding 3D point lies inside or outside the observed object. This procedure leads to sub-optimal results because correlations between predictions at neighboring locations are taken into account only implicitly, via the extracted features. For more accurate results, we propose the occupancy planes (OPlanes) representation, which formulates single-view RGB-D human reconstruction as occupancy prediction on planes that slice through the camera's view frustum. This representation provides more flexibility than a voxel grid and leverages correlations better than per-point classification. On the challenging S3D data, a simple classifier based on the OPlanes representation yields compelling results, especially in difficult situations with partial occlusion by other objects and partial visibility, which prior work has not addressed.
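
To make the contrast concrete, the following is a minimal sketch, not the authors' released code: all module names, layer sizes, and the depth range are hypothetical. It juxtaposes a per-point classifier, which scores each sampled 3D point independently from a pixel-aligned feature, with an OPlanes-style head that predicts a full occupancy map for each fronto-parallel slice of the frustum, so convolutions couple neighboring predictions explicitly.

```python
# Hypothetical sketch of per-point classification vs. OPlanes-style prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPointClassifier(nn.Module):
    """Baseline: classify each 3D point independently from a pixel-aligned feature."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.backbone = nn.Conv2d(4, feat_dim, 3, padding=1)   # RGB-D image -> feature map
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, 1))             # feature + depth -> occupancy logit

    def forward(self, rgbd, uv, z):
        # uv: (B, N, 2) normalized image coords of projected 3D points; z: (B, N, 1) depths
        feats = self.backbone(rgbd)                                          # (B, C, H, W)
        sampled = F.grid_sample(feats, uv.unsqueeze(1), align_corners=True)  # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)                        # (B, N, C)
        return self.mlp(torch.cat([sampled, z], dim=-1))                     # (B, N, 1), independent per point

class OPlanesPredictor(nn.Module):
    """OPlanes-style: predict one occupancy map per depth plane, so the convolutional
    head couples neighboring predictions instead of treating points independently."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.backbone = nn.Conv2d(4, feat_dim, 3, padding=1)
        self.head = nn.Conv2d(feat_dim + 1, 1, 3, padding=1)   # plane depth fed as an extra channel

    def forward(self, rgbd, plane_depths):
        feats = self.backbone(rgbd)                             # (B, C, H, W)
        B, _, H, W = feats.shape
        logits = []
        for d in plane_depths:                                  # one fronto-parallel slice per depth
            depth_ch = torch.full((B, 1, H, W), float(d), device=feats.device)
            logits.append(self.head(torch.cat([feats, depth_ch], dim=1)))
        return torch.stack(logits, dim=1)                       # (B, P, 1, H, W) occupancy maps

# Usage sketch: slice the frustum with 8 planes over an assumed depth range.
rgbd = torch.randn(1, 4, 64, 64)
planes = torch.linspace(0.5, 3.0, steps=8)
occ = OPlanesPredictor()(rgbd, planes)                          # (1, 8, 1, 64, 64)
```

Under these assumptions, thresholding the per-plane maps and stacking them recovers a frustum-aligned occupancy volume, while the number and placement of planes can be chosen freely, which is the flexibility over fixed voxel grids the abstract refers to.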

Published

2023-06-26

How to Cite

Zhao, X., Hu, Y.-T., Ren, Z., & Schwing, A. G. (2023). Occupancy Planes for Single-View RGB-D Human Reconstruction. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3633-3641. https://doi.org/10.1609/aaai.v37i3.25474

Issue

Vol. 37 No. 3 (2023)

Section

AAAI Technical Track on Computer Vision III