PolarFormer: Multi-Camera 3D Object Detection with Polar Transformer

Authors

  • Yanqin Jiang NLPR, Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences
  • Li Zhang School of Data Science, Fudan University, School of Computer Science, Fudan University
  • Zhenwei Miao Alibaba DAMO Academy
  • Xiatian Zhu Surrey Institute for People-Centred Artificial Intelligence, CVSSP, University of Surrey
  • Jin Gao NLPR, Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences
  • Weiming Hu NLPR, Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences, School of Information Science and Technology, ShanghaiTech University
  • Yu-Gang Jiang School of Computer Science, Fudan University

DOI:

https://doi.org/10.1609/aaai.v37i1.25185

Keywords:

CV: Object Detection & Categorization, CV: Vision for Robotics & Autonomous Driving, CV: 3D Computer Vision

Abstract

3D object detection in autonomous driving aims to reason about “what” and “where” the objects of interest are in the 3D world. Following the conventional wisdom of previous 2D object detection, existing methods often adopt the canonical Cartesian coordinate system with perpendicular axes. However, we conjecture that this does not fit the nature of the ego car’s perspective, as each onboard camera perceives the world in the shape of a wedge intrinsic to the imaging geometry, with radial (non-perpendicular) axes. Hence, in this paper we advocate the exploitation of the Polar coordinate system and propose a new Polar Transformer (PolarFormer) for more accurate 3D object detection in the bird’s-eye view (BEV), taking as input only multi-camera 2D images. Specifically, we design a cross-attention based Polar detection head without restriction on the shape of the input structure to deal with irregular Polar grids. To tackle the unconstrained object scale variations along the Polar distance dimension, we further introduce a multi-scale Polar representation learning strategy. As a result, our model makes the best use of the rasterized Polar representation by attending to the corresponding image observations in a sequence-to-sequence fashion, subject to the geometric constraints. Thorough experiments on the nuScenes dataset demonstrate that our PolarFormer significantly outperforms state-of-the-art 3D object detection alternatives.
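
The abstract's central geometric observation is that each image column corresponds to one azimuth ray of a Polar BEV grid, so a set of radial queries only needs to attend to the features of that column. The sketch below illustrates this idea; it is not the authors' released implementation. The module name PolarRayCrossAttention, the number of radial bins, and the other hyper-parameters are illustrative assumptions, with a standard PyTorch multi-head cross-attention standing in for the paper's cross-plane encoder.

```python
# Minimal sketch (not the authors' code): per-column cross-attention that
# lifts image features onto the radial bins of a Polar BEV grid. Shapes and
# hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn


class PolarRayCrossAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8, num_radial_bins=64):
        super().__init__()
        # One learnable query per radial bin along a single azimuth ray.
        self.radial_queries = nn.Parameter(torch.randn(num_radial_bins, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, column_feats):
        # column_feats: (B * W, H, C), i.e. each of the W image columns is
        # treated as an independent sequence (one azimuth ray).
        n = column_feats.shape[0]
        queries = self.radial_queries.unsqueeze(0).expand(n, -1, -1)  # (B*W, R, C)
        ray_feats, _ = self.attn(query=queries, key=column_feats, value=column_feats)
        return self.norm(ray_feats)  # (B*W, R, C): features along each Polar ray


# Toy usage: 2 images, a 16 x 44 feature map with 256 channels.
feats = torch.randn(2, 256, 16, 44)                       # (B, C, H, W)
b, c, h, w = feats.shape
columns = feats.permute(0, 3, 2, 1).reshape(b * w, h, c)  # one sequence per column
rays = PolarRayCrossAttention()(columns)                  # (B * W, 64, 256)
polar_bev = rays.reshape(b, w, 64, 256)                   # (B, azimuth, radius, C)
```

Restricting each set of radial queries to its own image column is one way to realize the geometric constraints mentioned above; a multi-scale variant would repeat the same attention over feature maps of several resolutions.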

Published

2023-06-26

How to Cite

Jiang, Y., Zhang, L., Miao, Z., Zhu, X., Gao, J., Hu, W., & Jiang, Y.-G. (2023). PolarFormer: Multi-Camera 3D Object Detection with Polar Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1042-1050. https://doi.org/10.1609/aaai.v37i1.25185

Issue

Vol. 37 No. 1 (2023)

Section

AAAI Technical Track on Computer Vision I