TANet: Robust 3D Object Detection from Point Clouds with Triple Attention

Authors

  • Zhe Liu Huazhong University of Science and Technology
  • Xin Zhao Chinese Academy of Sciences
  • Tengteng Huang Huazhong University of Science and Technology
  • Ruolan Hu Huazhong University of Science and Technology
  • Yu Zhou Huazhong University of Science and Technology
  • Xiang Bai Huazhong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v34i07.6837

Abstract

In this paper, we focus on the robustness of 3D object detection from point clouds, which has rarely been discussed in existing approaches. We observe two crucial phenomena: 1) the detection accuracy for hard objects, e.g., pedestrians, is unsatisfactory; 2) when additional noise points are added, the performance of existing approaches decreases rapidly. To alleviate these problems, we introduce a novel TANet, which mainly contains a Triple Attention (TA) module and a Coarse-to-Fine Regression (CFR) module. By jointly considering channel-wise, point-wise, and voxel-wise attention, the TA module enhances the crucial information of the target while suppressing unstable cloud points. Moreover, a novel stacked TA further exploits multi-level feature attention. In addition, the CFR module boosts localization accuracy without excessive computational cost. Experimental results on the validation set of the KITTI dataset demonstrate that, in challenging noisy cases, i.e., with additional random noise points added around each object, the presented approach goes far beyond state-of-the-art approaches. Furthermore, on the 3D object detection task of the KITTI benchmark, our approach ranks first on the Pedestrian class, using point clouds as the only input. The running speed is around 29 frames per second.
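The abstract describes the TA module only at a high level. Below is a minimal PyTorch sketch of what a triple (point-wise, channel-wise, and voxel-wise) attention block over per-voxel point features could look like; the module name, layer sizes, pooling choices, and multiplicative fusion are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TripleAttention(nn.Module):
    """Sketch of a triple attention block: point-wise, channel-wise,
    and voxel-wise attention applied to per-voxel point features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Point-wise attention: one score per point inside a voxel.
        self.point_fc = nn.Linear(channels, 1)
        # Channel-wise attention: one score per feature channel.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Voxel-wise attention: a single gate for the whole voxel.
        self.voxel_fc = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (V, N, C) = (voxels, points per voxel, channels)
        point_att = torch.sigmoid(self.point_fc(x))               # (V, N, 1)
        pooled = x.max(dim=1).values                              # (V, C)
        channel_att = torch.sigmoid(self.channel_fc(pooled))      # (V, C)
        # Fuse point- and channel-wise attention multiplicatively
        # (an assumption; the paper defines its own fusion).
        x = x * point_att * channel_att.unsqueeze(1)              # (V, N, C)
        voxel_att = torch.sigmoid(
            self.voxel_fc(x.max(dim=1).values)                    # (V, 1)
        )
        return x * voxel_att.unsqueeze(1)


if __name__ == "__main__":
    # Example: 32 voxels, 100 points per voxel, 64-channel features.
    ta = TripleAttention(64)
    out = ta(torch.randn(32, 100, 64))
    print(out.shape)  # torch.Size([32, 100, 64])
```

The intent, per the abstract, is that the point- and voxel-wise gates down-weight unstable or noisy points and voxels while the channel-wise gate emphasizes informative feature dimensions; a "stacked" variant would apply such blocks at multiple feature levels.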

Published

2020-04-03

How to Cite

Liu, Z., Zhao, X., Huang, T., Hu, R., Zhou, Y., & Bai, X. (2020). TANet: Robust 3D Object Detection from Point Clouds with Triple Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11677-11684. https://doi.org/10.1609/aaai.v34i07.6837

Section

AAAI Technical Track: Vision