PointINet: Point Cloud Frame Interpolation Network
Keywords: 3D Computer Vision, Vision for Robotics & Autonomous Driving
Abstract
LiDAR point cloud streams are usually sparse in the time dimension, limited by hardware performance. The frame rates of mechanical LiDAR sensors are generally 10 to 20 Hz, much lower than those of other commonly used sensors such as cameras. To overcome this temporal limitation of LiDAR sensors, this paper studies a novel task named Point Cloud Frame Interpolation: given two consecutive point cloud frames, generate intermediate frame(s) between them. To achieve this, we propose a novel framework, the Point Cloud Frame Interpolation Network (PointINet), which upsamples low-frame-rate point cloud streams to higher frame rates. We first estimate bidirectional 3D scene flow between the two point clouds and then warp each frame to the given time step based on that flow. To fuse the two warped frames and generate the intermediate point cloud(s), we propose a novel learning-based points fusion module that takes both warped point clouds into consideration simultaneously. We design both quantitative and qualitative experiments to evaluate point cloud frame interpolation, and extensive experiments on two large-scale outdoor LiDAR datasets demonstrate the effectiveness of the proposed PointINet. Our code is available at https://github.com/ispc-lab/PointINet.git.
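The warping step described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, assuming a linear motion model between the two frames; the function names are hypothetical, and the naive concatenate-and-subsample fusion at the end is only a placeholder for the paper's learned points fusion module, not the actual method.

```python
import numpy as np

def warp_to_timestep(pc1, pc2, flow_fw, flow_bw, t):
    """Warp two consecutive point cloud frames to intermediate time t in (0, 1).

    pc1, pc2 : (N, 3) point coordinates at times 0 and 1.
    flow_fw  : (N, 3) estimated 3D scene flow from pc1 to pc2.
    flow_bw  : (N, 3) estimated 3D scene flow from pc2 to pc1.
    """
    # Move each frame along its scene flow, scaled by the temporal
    # distance to the target time step (linear motion assumption).
    warped1 = pc1 + t * flow_fw
    warped2 = pc2 + (1.0 - t) * flow_bw
    return warped1, warped2

def naive_fuse(warped1, warped2, n_out):
    """Placeholder fusion: concatenate both warped frames and randomly
    subsample to n_out points. PointINet instead uses a learned,
    attention-style points fusion module for this step."""
    merged = np.concatenate([warped1, warped2], axis=0)
    idx = np.random.choice(len(merged), size=n_out, replace=False)
    return merged[idx]
```

In practice the scene flow fields `flow_fw` and `flow_bw` come from a scene flow estimation network; the sketch only shows how, once they are known, both frames can be aligned to any time step `t` before fusion.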
How to Cite
Lu, F., Chen, G., Qu, S., Li, Z., Liu, Y., & Knoll, A. (2021). PointINet: Point Cloud Frame Interpolation Network. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2251-2259. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16324
AAAI Technical Track on Computer Vision II