ACDNet: Adaptively Combined Dilated Convolution for Monocular Panorama Depth Estimation

Authors

  • Chuanqing Zhuang, University of Chinese Academy of Sciences
  • Zhengda Lu, University of Chinese Academy of Sciences
  • Yiqun Wang, Chongqing University; KAUST
  • Jun Xiao, University of Chinese Academy of Sciences
  • Ying Wang, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v36i3.20278

Keywords:

Computer Vision (CV)

Abstract

Depth estimation is a crucial step in 3D reconstruction from panorama images. Panorama images preserve the complete spatial information of a scene but introduce distortion through equirectangular projection. In this paper, we propose ACDNet, a network based on adaptively combined dilated convolution, to predict the dense depth map for a monocular panoramic image. Specifically, we combine convolution kernels with different dilations to extend the receptive field in the equirectangular projection. Meanwhile, we introduce an adaptive channel-wise fusion module that summarizes the feature maps and obtains diverse attention areas in the receptive field along the channels. Because the adaptive channel-wise fusion module is built on channel-wise attention, the network can capture and leverage cross-channel contextual information efficiently. Finally, we conduct depth estimation experiments on three datasets (both virtual and real-world), and the results demonstrate that our proposed ACDNet substantially outperforms the current state-of-the-art (SOTA) methods. Our code and model parameters are available at https://github.com/zcq15/ACDNet.
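As a rough illustration of the idea the abstract describes, the sketch below runs several dilated convolution branches in parallel and fuses them with a channel-wise softmax attention computed from global average pooling. This is a minimal NumPy mock-up under stated assumptions: the 3x3 kernel size, the softmax-over-branches fusion, and all function names are illustrative choices, not the authors' actual ACDNet implementation.

```python
import numpy as np

def dilated_conv2d(x, w, d):
    """Naive 3x3 dilated convolution with zero padding (output keeps H, W).

    x: input feature map, shape (C_in, H, W)
    w: kernel weights, shape (C_out, C_in, 3, 3)
    d: dilation rate (d=1 is an ordinary 3x3 convolution)
    """
    c_out, c_in, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (d, d), (d, d)))  # zero-pad so output size matches input
    out = np.zeros((c_out, h, wd))
    for i in range(3):
        for j in range(3):
            # shifted view of the input corresponding to tap (i, j) at dilation d
            patch = xp[:, i * d:i * d + h, j * d:j * d + wd]
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], patch)
    return out

def acd_block(x, weights, dilations):
    """Hypothetical adaptively combined dilated convolution block.

    Runs one dilated branch per (weight, dilation) pair, then fuses the
    branches per channel with softmax attention over global-average-pooled
    responses, so each channel can favor a different receptive field.
    """
    branches = np.stack([dilated_conv2d(x, w, d)
                         for w, d in zip(weights, dilations)])  # (K, C_out, H, W)
    gap = branches.mean(axis=(2, 3))                            # (K, C_out)
    attn = np.exp(gap - gap.max(axis=0))                        # stable softmax
    attn /= attn.sum(axis=0)                                    # over the K branches
    return (attn[:, :, None, None] * branches).sum(axis=0)      # (C_out, H, W)
```

In a real network the attention weights would come from a small learned module rather than directly from pooled activations; the point here is only the structure: parallel dilations for a wider receptive field, then a per-channel weighted combination.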

Published

2022-06-28

How to Cite

Zhuang, C., Lu, Z., Wang, Y., Xiao, J., & Wang, Y. (2022). ACDNet: Adaptively Combined Dilated Convolution for Monocular Panorama Depth Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3653-3661. https://doi.org/10.1609/aaai.v36i3.20278

Section

AAAI Technical Track on Computer Vision III