Attention-Based Multi-Modal Fusion Network for Semantic Scene Completion

Authors

  • Siqi Li, Tsinghua University
  • Changqing Zou, Huawei Noah's Ark Lab
  • Yipeng Li, Tsinghua University
  • Xibin Zhao, Tsinghua University
  • Yue Gao, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i07.6803

Abstract

This paper presents an end-to-end 3D convolutional network named the attention-based multi-modal fusion network (AMFNet) for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from single-view RGB-D images. In contrast to previous methods that use only the semantic features extracted from RGB-D images, the proposed AMFNet learns to perform effective 3D scene completion and semantic segmentation simultaneously by leveraging 2D semantic segmentation inferred from RGB-D images together with reliable depth cues in the spatial dimension. This is achieved by combining a multi-modal fusion architecture guided by 2D semantic segmentation with a 3D semantic completion network empowered by residual attention blocks. We validate our method on both the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset; the results show gains of 2.5% and 2.6%, respectively, over the state-of-the-art method.
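
The paper page does not include an implementation, but the abstract's "residual attention blocks" in the 3D completion network admit a compact illustration. Below is a minimal PyTorch sketch of one plausible form: a residual convolutional trunk modulated by a voxel-wise sigmoid attention mask. The class name, layer sizes, and channel counts are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class ResidualAttentionBlock3D(nn.Module):
    """Hypothetical 3D residual attention block (illustrative sketch,
    not the authors' implementation): a convolutional trunk branch is
    re-weighted by a soft attention mask and added back to the input."""

    def __init__(self, channels: int):
        super().__init__()
        # Trunk branch: two 3x3x3 convolutions producing residual features.
        self.trunk = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        # Attention branch: 1x1x1 convolution yielding a voxel-wise
        # soft mask in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.trunk(x)
        m = self.mask(x)
        # Attention-weighted residual: trunk features are scaled by the
        # mask before the skip connection.
        return self.relu(x + m * t)


if __name__ == "__main__":
    # Example: a batch of fused RGB-D feature volumes (N, C, D, H, W).
    block = ResidualAttentionBlock3D(channels=32)
    feats = torch.randn(2, 32, 15, 9, 15)
    print(block(feats).shape)  # torch.Size([2, 32, 15, 9, 15])
```

In this sketch the mask leaves useful voxels near their original magnitude (mask close to 1) and suppresses uninformative ones, which is one common way such attention-gated residual units are built for volumetric scene understanding.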

Published

2020-04-03

How to Cite

Li, S., Zou, C., Li, Y., Zhao, X., & Gao, Y. (2020). Attention-Based Multi-Modal Fusion Network for Semantic Scene Completion. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11402-11409. https://doi.org/10.1609/aaai.v34i07.6803

Section

AAAI Technical Track: Vision