Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation

Authors

  • Jinwoo Bae, DGIST
  • Sungho Moon, DGIST
  • Sunghoon Im, DGIST

DOI:

https://doi.org/10.1609/aaai.v37i1.25090

Keywords:

CV: 3D Computer Vision, CV: Adversarial Attacks & Robustness, CV: Applications, CV: Vision for Robotics & Autonomous Driving

Abstract

Self-supervised monocular depth estimation has been widely studied in recent years. Most of the work has focused on improving performance on benchmark datasets, such as KITTI, but has offered few experiments on generalization performance. In this paper, we investigate backbone networks (e.g., CNNs, Transformers, and CNN-Transformer hybrid models) with respect to the generalization of monocular depth estimation. We first evaluate state-of-the-art models on diverse public datasets that were never seen during network training. Next, we investigate the effects of texture-biased and shape-biased representations using various texture-shifted datasets that we generated. We observe that Transformers exhibit a strong shape bias, whereas CNNs exhibit a strong texture bias. We also find that shape-biased models show better generalization performance for monocular depth estimation than texture-biased models. Based on these observations, we design a new CNN-Transformer hybrid network with a multi-level adaptive feature fusion module, called MonoFormer. The design intuition behind MonoFormer is to increase shape bias by employing Transformers while compensating for their weak locality bias by adaptively fusing multi-level representations. Extensive experiments show that the proposed method achieves state-of-the-art performance on various public datasets and the best generalization ability among competing methods.
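To make the fusion idea in the abstract concrete, below is a minimal sketch of what a multi-level adaptive feature fusion module could look like: per-level weights are predicted from the features themselves and used to blend multi-level encoder representations before depth decoding. This is not the authors' released implementation; the module name, channel widths, and weighting scheme here are illustrative assumptions.

```python
# Hypothetical sketch of multi-level adaptive feature fusion (not the
# official MonoFormer code). Each encoder level is projected to a common
# width, resized to the finest resolution, and blended with learned,
# input-dependent softmax weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFeatureFusion(nn.Module):
    """Fuses features from several encoder stages with input-dependent
    weights, letting the decoder balance global (shape-biased) cues from
    the Transformer against local (texture/locality) cues."""

    def __init__(self, channels_per_level, fused_channels=256):
        super().__init__()
        # 1x1 convolutions project each level to a common channel width.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, fused_channels, kernel_size=1)
            for c in channels_per_level
        )
        # A small head predicts one fusion weight per level from pooled features.
        self.weight_head = nn.Linear(
            fused_channels * len(channels_per_level), len(channels_per_level)
        )

    def forward(self, feats):
        # Resize every projected level to the finest spatial resolution.
        target = feats[0].shape[-2:]
        projected = [
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        # Global-average-pool each level, then predict softmax fusion weights.
        pooled = torch.cat([x.mean(dim=(2, 3)) for x in projected], dim=1)
        w = self.weight_head(pooled).softmax(dim=1)  # shape: (B, num_levels)
        # Weighted sum of the resized levels, broadcasting weights per sample.
        fused = sum(
            w[:, i, None, None, None] * projected[i] for i in range(len(projected))
        )
        return fused
```

For example, with four encoder stages producing 96-, 192-, 384-, and 768-channel features, `AdaptiveFeatureFusion([96, 192, 384, 768])` returns a single fused map at the finest level's resolution, which a depth decoder could then upsample.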

Published

2023-06-26

How to Cite

Bae, J., Moon, S., & Im, S. (2023). Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 187-196. https://doi.org/10.1609/aaai.v37i1.25090

Issue

Vol. 37 No. 1 (2023)

Section

AAAI Technical Track on Computer Vision I