SVGA-Net: Sparse Voxel-Graph Attention Network for 3D Object Detection from Point Clouds
Keywords: Computer Vision (CV), Domain(s) Of Application (APP)
Abstract
Accurate 3D object detection from point clouds has become a crucial component in autonomous driving. However, the volumetric representations and projection methods in previous works fail to establish relationships between local point sets. In this paper, we propose Sparse Voxel-Graph Attention Network (SVGA-Net), a novel end-to-end trainable network that mainly contains a voxel-graph module and a sparse-to-dense regression module to achieve accurate 3D detection from raw LiDAR data. Specifically, SVGA-Net constructs a local complete graph within each divided 3D spherical voxel and a global KNN graph through all voxels. The local and global graphs serve as an attention mechanism to enhance the extracted features. In addition, the novel sparse-to-dense regression module improves 3D box estimation accuracy through feature map aggregation at different levels. Experiments on the KITTI detection benchmark and the Waymo Open Dataset demonstrate the effectiveness of extending graph representations to 3D object detection, and the proposed SVGA-Net achieves decent detection accuracy.
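The global KNN graph mentioned above connects each voxel to its k nearest voxels by center distance. A minimal NumPy sketch of this construction is shown below; the function name, toy data, and exclusion of self-loops are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def build_knn_graph(centers, k):
    """Build a global k-nearest-neighbor graph over voxel centers.

    centers: (N, 3) array of voxel center coordinates.
    Returns an (N, k) array of neighbor indices per voxel,
    excluding the voxel itself.
    """
    # Pairwise squared Euclidean distances between all voxel centers.
    diff = centers[:, None, :] - centers[None, :, :]
    dist2 = np.einsum('ijk,ijk->ij', diff, diff)
    np.fill_diagonal(dist2, np.inf)  # exclude self-loops
    # Indices of the k closest voxels for each voxel.
    return np.argsort(dist2, axis=1)[:, :k]

# Toy example: 5 voxel centers in 3D, 2 neighbors each.
centers = np.array([[0., 0., 0.],
                    [1., 0., 0.],
                    [0., 1., 0.],
                    [5., 5., 5.],
                    [5., 5., 4.]])
neighbors = build_knn_graph(centers, k=2)
```

In the paper's pipeline, edges of this graph (together with the local complete graphs inside each spherical voxel) would feed an attention mechanism over voxel features; the sketch covers only the graph-construction step.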
How to Cite
He, Q., Wang, Z., Zeng, H., Zeng, Y., & Liu, Y. (2022). SVGA-Net: Sparse Voxel-Graph Attention Network for 3D Object Detection from Point Clouds. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 870-878. https://doi.org/10.1609/aaai.v36i1.19969
AAAI Technical Track on Computer Vision I