3D Box Proposals From a Single Monocular Image of an Indoor Scene

Authors

  • Wei Zhuo, Australian National University; Data61, CSIRO
  • Mathieu Salzmann, Ecole Polytechnique Fédérale de Lausanne (EPFL)
  • Xuming He, ShanghaiTech University
  • Miaomiao Liu, Data61, CSIRO; Australian National University

DOI

https://doi.org/10.1609/aaai.v32i1.12314

Keywords

Indoor Scene Understanding, 3D Box Proposal, Deep Learning

Abstract

Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function (TSDF) module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework generates high-quality 3D box proposals and outperforms the two-stage baseline that first performs state-of-the-art depth prediction and then depth-based 3D proposal generation.
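To give a rough intuition for the TSDF step the abstract refers to, the sketch below converts a depth map into a truncated signed distance volume by projecting a voxel grid into the image and truncating the signed distance along each ray. It is a minimal, hand-crafted illustration, not the authors' module: the intrinsics (fx, fy, cx, cy), grid bounds and truncation margin are assumed values, and the paper's version is a learned, residual and differentiable variant designed to tolerate inaccuracies in the predicted depth.

```python
# Minimal sketch (NumPy, non-differentiable) of building a TSDF volume from a
# depth map. All parameters below are illustrative assumptions, not values
# taken from the paper.
import numpy as np

def tsdf_from_depth(depth, fx, fy, cx, cy, grid_min, grid_max,
                    resolution=32, margin=0.1):
    """Return a (resolution, resolution, resolution) TSDF volume in camera space."""
    h, w = depth.shape

    # Voxel centres on a regular grid inside the axis-aligned box [grid_min, grid_max].
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)  # (N, 3), camera coordinates

    # Pinhole projection of every voxel centre into the image.
    z = np.maximum(pts[:, 2], 1e-6)                    # guard against division by zero
    u = np.round(fx * pts[:, 0] / z + cx).astype(int)
    v = np.round(fy * pts[:, 1] / z + cy).astype(int)
    valid = (pts[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Signed distance along the viewing ray: positive in free space, negative
    # behind the observed surface, truncated to [-1, 1] by the margin.
    tsdf = np.ones(pts.shape[0])                       # unobserved voxels default to +1
    observed = depth[v[valid], u[valid]]
    tsdf[valid] = np.clip((observed - pts[valid, 2]) / margin, -1.0, 1.0)
    return tsdf.reshape(resolution, resolution, resolution)
```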

Published

2018-04-27

How to Cite

Zhuo, W., Salzmann, M., He, X., & Liu, M. (2018). 3D Box Proposals From a Single Monocular Image of an Indoor Scene. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12314