Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency

Authors

  • Seokju Lee, Korea Advanced Institute of Science and Technology (KAIST)
  • Sunghoon Im, Daegu Gyeongbuk Institute of Science and Technology (DGIST)
  • Stephen Lin, Microsoft Research
  • In So Kweon, Korea Advanced Institute of Science and Technology (KAIST)

DOI:

https://doi.org/10.1609/aaai.v35i3.16281

Keywords:

3D Computer Vision, Applications, Vision for Robotics & Autonomous Driving

Abstract

We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion, and depth in a monocular camera setup without supervision. Our technical contributions are three-fold. First, we highlight the fundamental difference between inverse and forward projection while modeling the individual motion of each rigid object, and propose a geometrically correct projection pipeline using a neural forward projection module. Second, we design a unified instance-aware photometric and geometric consistency loss that holistically imposes self-supervisory signals for every background and object region. Lastly, we introduce a general-purpose auto-annotation scheme that uses any off-the-shelf instance segmentation and optical flow models to produce video instance segmentation maps, which serve as input to our training pipeline. These proposed elements are validated in a detailed ablation study. Through extensive experiments conducted on the KITTI and Cityscapes datasets, our framework is shown to outperform state-of-the-art depth and motion estimation methods. Our code, dataset, and models are publicly available.
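
For context on the projection pipeline the abstract contrasts with forward projection, the sketch below illustrates the conventional inverse projection (backward warping) step that underlies photometric consistency losses in self-supervised monocular depth learning: target pixels are back-projected with the predicted depth, transformed by the relative camera pose, and used to resample the source frame. This is a minimal illustration of the standard technique, not the authors' released code; the function name, tensor shapes, and variable names are assumptions for the example.

    # Minimal sketch of inverse projection (backward warping) for a
    # photometric consistency loss. Illustrative only, not the paper's code.
    import torch
    import torch.nn.functional as F

    def inverse_warp(src_img, tgt_depth, K, T_tgt_to_src):
        """Synthesize the target view by backward-warping the source image.

        src_img:      (B, 3, H, W) source frame
        tgt_depth:    (B, 1, H, W) predicted depth of the target frame
        K:            (B, 3, 3)    camera intrinsics
        T_tgt_to_src: (B, 4, 4)    relative camera pose (target -> source)
        """
        B, _, H, W = src_img.shape
        device = src_img.device

        # Pixel grid of the target frame in homogeneous coordinates: (B, 3, H*W)
        ys, xs = torch.meshgrid(
            torch.arange(H, device=device, dtype=torch.float32),
            torch.arange(W, device=device, dtype=torch.float32),
            indexing="ij",
        )
        ones = torch.ones_like(xs)
        pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

        # Back-project target pixels to 3D camera points: X = D * K^{-1} p
        cam_pts = torch.linalg.inv(K) @ pix * tgt_depth.reshape(B, 1, -1)

        # Transform the points into the source camera frame and project with K
        R, t = T_tgt_to_src[:, :3, :3], T_tgt_to_src[:, :3, 3:]
        src_pts = K @ (R @ cam_pts + t)

        # Perspective division to source pixel coordinates
        src_xy = src_pts[:, :2] / src_pts[:, 2:].clamp(min=1e-6)

        # Normalize to [-1, 1] for grid_sample and resample the source image
        gx = 2.0 * src_xy[:, 0] / (W - 1) - 1.0
        gy = 2.0 * src_xy[:, 1] / (H - 1) - 1.0
        grid = torch.stack([gx, gy], dim=-1).reshape(B, H, W, 2)
        return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

    # Photometric consistency compares the synthesized view with the real target:
    # warped = inverse_warp(src_img, tgt_depth, K, T_tgt_to_src)
    # loss = (warped - tgt_img).abs().mean()

Because this backward warp samples the source image at depth-projected locations, it is only geometrically valid where the scene is static relative to the camera; handling independently moving objects is what motivates the paper's forward projection module.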

Published

2021-05-18

How to Cite

Lee, S., Im, S., Lin, S., & Kweon, I. S. (2021). Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 1863-1872. https://doi.org/10.1609/aaai.v35i3.16281

Section

AAAI Technical Track on Computer Vision II