FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion

Authors

  • Lina Liu (Institute of Cyber-Systems and Control, Zhejiang University, China; Baidu Research, China)
  • Xibin Song (Baidu Research, China; National Engineering Laboratory of Deep Learning Technology and Application, China)
  • Xiaoyang Lyu (Institute of Cyber-Systems and Control, Zhejiang University, China)
  • Junwei Diao (Institute of Cyber-Systems and Control, Zhejiang University, China)
  • Mengmeng Wang (Institute of Cyber-Systems and Control, Zhejiang University, China)
  • Yong Liu (Institute of Cyber-Systems and Control, Zhejiang University, China)
  • Liangjun Zhang (Baidu Research, China; National Engineering Laboratory of Deep Learning Technology and Application, China)

Keywords

3D Computer Vision

Abstract

Depth completion aims to recover a dense depth map from a sparse depth map, with the corresponding color image as input. Recent approaches mainly formulate depth completion as a one-stage end-to-end learning task that outputs dense depth maps directly. However, the feature extraction and supervision in one-stage frameworks are insufficient, limiting the performance of these approaches. To address this problem, we propose a novel end-to-end residual learning framework that formulates depth completion as a two-stage learning task, i.e., a sparse-to-coarse stage and a coarse-to-fine stage. First, a coarse dense depth map is obtained by a simple CNN framework. Then, a refined depth map is obtained in the coarse-to-fine stage using a residual learning strategy, with the coarse depth map and the color image as input. Specifically, in the coarse-to-fine stage, a channel shuffle extraction operation is utilized to extract more representative features from the color image and the coarse depth map, and an energy-based fusion operation is exploited to effectively fuse the features obtained by the channel shuffle operation, leading to more accurate and refined depth maps. We achieve state-of-the-art RMSE performance on the KITTI benchmark. Extensive experiments on other datasets further demonstrate the superiority of our approach over current state-of-the-art depth completion approaches.
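The abstract's three building blocks can be illustrated with a minimal NumPy sketch. Note the assumptions: the paper does not specify its exact operations here, so the shuffle below is the standard ShuffleNet-style reshape-transpose-reshape, the "energy" is taken to be the per-pixel sum of squared activations, and the function names (`channel_shuffle`, `energy_fusion`, `coarse_to_fine`) are illustrative, not the authors' API.

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle (an assumption about the paper's
    'channel shuffle extraction'): interleave channels across groups so
    color and depth features get mixed before fusion.
    x: array of shape (N, C, H, W)."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group and sub-channel axes
    return x.reshape(n, c, h, w)

def energy_fusion(f_rgb, f_depth, eps=1e-8):
    """Fuse two feature maps with per-pixel weights derived from their
    'energy', assumed here to be the channel-wise sum of squares."""
    e_rgb = (f_rgb ** 2).sum(axis=1, keepdims=True)
    e_depth = (f_depth ** 2).sum(axis=1, keepdims=True)
    total = e_rgb + e_depth + eps
    return (e_rgb / total) * f_rgb + (e_depth / total) * f_depth

def coarse_to_fine(coarse_depth, residual):
    """Second-stage residual learning: the refined map is the coarse
    prediction plus a learned residual (the network predicts `residual`)."""
    return coarse_depth + residual
```

For example, shuffling channels `[0, 1, 2, 3]` with two groups yields `[0, 2, 1, 3]`, and a feature map with zero energy contributes nothing to the fused output, so the fusion adaptively favors the more active branch at each pixel.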

Published

2021-05-18

How to Cite

Liu, L., Song, X., Lyu, X., Diao, J., Wang, M., Liu, Y., & Zhang, L. (2021). FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2136-2144. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16311

Section

AAAI Technical Track on Computer Vision II