MonoGRNet: A Geometric Reasoning Network for Monocular 3D Object Localization

Authors

  • Zengyi Qin Tsinghua University
  • Jinglu Wang Microsoft Research Asia
  • Yan Lu Microsoft Research Asia

DOI:

https://doi.org/10.1609/aaai.v33i01.33018851

Abstract

Localizing objects in real 3D space, which plays a crucial role in scene understanding, is particularly challenging given only a single RGB image, due to the geometric information lost during image projection. We propose MonoGRNet for amodal 3D object localization from a monocular RGB image via geometric reasoning in both the observed 2D projection and the unobserved depth dimension. MonoGRNet is a single, unified network composed of four task-specific subnetworks responsible for 2D object detection, instance depth estimation (IDE), 3D localization and local corner regression. Unlike pixel-level depth estimation, which needs per-pixel annotations, our novel IDE method directly predicts the depth of the target 3D bounding box’s center using sparse supervision. The 3D localization is further achieved by estimating the position in the horizontal and vertical dimensions. Finally, MonoGRNet is jointly learned by optimizing the locations and poses of the 3D bounding boxes in the global context. We demonstrate that MonoGRNet achieves state-of-the-art performance on challenging datasets.
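
As a rough illustration of the geometric reasoning described in the abstract, the sketch below back-projects a predicted 2D box-center projection and an instance depth into a 3D center using the standard pinhole camera model. The function name, variable names, and intrinsic values are illustrative assumptions for this note, not the authors' implementation or the paper's exact formulation.

```python
import numpy as np

def backproject_center(uv, z_c, K):
    """Back-project a predicted 2D projection (u, v) of the 3D box center
    and an instance depth z_c into a 3D center in camera coordinates,
    assuming a pinhole camera with intrinsic matrix K."""
    u, v = uv
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Recover the horizontal and vertical coordinates from the depth.
    x = (u - cx) * z_c / fx
    y = (v - cy) * z_c / fy
    return np.array([x, y, z_c])

# Hypothetical example with KITTI-like intrinsics (illustrative values only).
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])
center_3d = backproject_center(uv=(650.0, 180.0), z_c=15.0, K=K)
print(center_3d)  # approximate 3D box center (meters, camera frame)
```

In this reading, the IDE subnetwork supplies z_c, the 2D detection branch supplies (u, v), and the horizontal and vertical positions follow from back-projection; the local corner regression then places the box corners around this center.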

Published

2019-07-17

How to Cite

Qin, Z., Wang, J., & Lu, Y. (2019). MonoGRNet: A Geometric Reasoning Network for Monocular 3D Object Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8851-8858. https://doi.org/10.1609/aaai.v33i01.33018851

Section

AAAI Technical Track: Vision