SEFormer: Structure Embedding Transformer for 3D Object Detection

Authors

  • Xiaoyu Feng, Tsinghua University
  • Heming Du, Australian National University
  • Hehe Fan, National University of Singapore
  • Yueqi Duan, Tsinghua University
  • Yongpan Liu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v37i1.25139

Keywords:

CV: 3D Computer Vision, CV: Object Detection & Categorization

Abstract

Effectively preserving and encoding structure features of objects in irregular and sparse LiDAR points is a crucial challenge for 3D object detection on point clouds. Recently, the Transformer has demonstrated promising performance on many 2D and even 3D vision tasks. Compared with fixed and rigid convolution kernels, the self-attention mechanism in the Transformer can adaptively exclude unrelated or noisy points and is thus well suited to preserving the local spatial structure of the irregular LiDAR point cloud. However, the Transformer only performs a simple weighted sum over point features based on the self-attention mechanism, and all points share the same value transformation. Such an isotropic operation cannot capture the direction-distance-oriented local structure, which is essential for 3D object detection. In this work, we propose a Structure-Embedding transFormer (SEFormer), which not only preserves the local structure as a traditional Transformer does but can also encode it. In contrast to the self-attention mechanism in the traditional Transformer, SEFormer learns different feature transformations for value points according to their relative directions and distances to the query point. We then build a SEFormer-based network for high-performance 3D object detection. Extensive experiments show that the proposed architecture achieves state-of-the-art results on the Waymo Open Dataset, one of the most significant 3D detection benchmarks for autonomous driving. Specifically, SEFormer achieves 79.02% mAP, 1.2% higher than existing works. Code: https://github.com/tdzdog/SEFormer.
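To make the contrast with vanilla self-attention concrete, below is a minimal, hypothetical PyTorch sketch of direction- and distance-conditioned value transformations. It is not the authors' released implementation; the module name, the octant/distance-ring binning, and the hyperparameters (e.g. `num_dist_bins`, `radius`) are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): value projections indexed by
# the neighbor's relative direction (octant) and distance (ring) to the query.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureEmbeddingAttention(nn.Module):
    """Local self-attention in which each value point is projected by a
    weight matrix chosen from a small bank, indexed by its spatial bin
    relative to the query. Vanilla attention would share one W_v for all."""

    def __init__(self, dim, num_dist_bins=2, radius=1.0):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.num_dist_bins = num_dist_bins
        self.radius = radius
        self.num_bins = 8 * num_dist_bins  # 8 direction octants x distance rings
        self.v_proj = nn.Parameter(torch.randn(self.num_bins, dim, dim) * dim ** -0.5)
        self.scale = dim ** -0.5

    def forward(self, query_feat, query_xyz, nbr_feat, nbr_xyz):
        # query_feat: (B, C), query_xyz: (B, 3)
        # nbr_feat:   (B, K, C), nbr_xyz: (B, K, 3)
        rel = nbr_xyz - query_xyz.unsqueeze(1)                          # (B, K, 3) offsets
        octant = ((rel > 0).long()
                  * torch.tensor([1, 2, 4], device=rel.device)).sum(-1)  # 0..7
        dist = rel.norm(dim=-1)
        ring = torch.clamp((dist / self.radius * self.num_dist_bins).long(),
                           max=self.num_dist_bins - 1)                   # 0..rings-1
        bin_idx = octant * self.num_dist_bins + ring                     # (B, K)

        # Direction/distance-dependent value transformation: each neighbor is
        # projected by the weight matrix of its own spatial bin.
        w = self.v_proj[bin_idx]                                         # (B, K, C, C)
        v = torch.einsum('bkc,bkcd->bkd', nbr_feat, w)                   # (B, K, C)

        q = self.q_proj(query_feat).unsqueeze(1)                         # (B, 1, C)
        k = self.k_proj(nbr_feat)                                        # (B, K, C)
        attn = F.softmax((q * k).sum(-1) * self.scale, dim=-1)           # (B, K)
        return (attn.unsqueeze(-1) * v).sum(dim=1)                       # (B, C)


# Example: one query per batch element with K = 16 neighbors.
# layer = StructureEmbeddingAttention(dim=64)
# out = layer(torch.randn(4, 64), torch.randn(4, 3),
#             torch.randn(4, 16, 64), torch.randn(4, 16, 3))  # -> (4, 64)
```

The sketch only illustrates the structural bias described in the abstract: the weighted sum over neighbors is standard self-attention, while the per-bin value projections make the operation anisotropic with respect to relative direction and distance.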

Published

2023-06-26

How to Cite

Feng, X., Du, H., Fan, H., Duan, Y., & Liu, Y. (2023). SEFormer: Structure Embedding Transformer for 3D Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 632-640. https://doi.org/10.1609/aaai.v37i1.25139

Section

AAAI Technical Track on Computer Vision I