Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract)

Authors

  • Qilin Jin, University of North Carolina
  • Parasara Sridhar Duggirala, University of North Carolina

DOI:

https://doi.org/10.1609/aaai.v34i10.7185

Abstract

In this paper, we present a two-step framework for high-precision dense depth perception from stereo RGB images and sparse LiDAR input. In the first step, we train a deep neural network to predict a dense depth map from the left image and sparse LiDAR data in a novel self-supervised manner. In the second step, we compute a disparity map from the predicted depths and refine it by ensuring that, for every pixel in the left image, its match in the right image according to the final disparity is a local optimum.
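The second step's local-optimum condition can be sketched as a small search around each pixel's initial disparity, keeping the candidate whose right-image patch best matches the left-image patch. This is only an illustrative sketch: the paper does not specify its matching cost or search strategy, so the SAD cost, patch size, and search radius below are assumptions.

```python
import numpy as np

def refine_disparity(left, right, disp, radius=2, patch=3):
    """Refine an initial disparity map so each pixel's match in the right
    image is a local optimum of a patch matching cost.

    Illustrative sketch only: uses a sum-of-absolute-differences (SAD) cost
    over a small patch, searched within +/- `radius` of the initial disparity.
    """
    H, W = left.shape
    half = patch // 2
    # Pad so patches at image borders stay in bounds.
    L = np.pad(left, half, mode="edge")
    R = np.pad(right, half, mode="edge")
    out = disp.copy()
    for y in range(H):
        for x in range(W):
            d0 = int(round(disp[y, x]))
            best_d, best_c = d0, np.inf
            pl = L[y:y + patch, x:x + patch].astype(np.float32)
            # Check candidates around the initial disparity; keep the
            # one whose right-image patch matches the left patch best.
            for d in range(d0 - radius, d0 + radius + 1):
                xr = x - d  # matching column in the right image
                if xr < 0 or xr >= W:
                    continue
                pr = R[y:y + patch, xr:xr + patch].astype(np.float32)
                c = np.abs(pl - pr).sum()
                if c < best_c:
                    best_c, best_d = c, d
            out[y, x] = best_d
    return out
```

On a synthetic pair where the right image is the left shifted by a known disparity, starting the search from a slightly wrong initial disparity recovers the true value at interior pixels.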

Published

2020-04-03

How to Cite

Jin, Q., & Duggirala, P. S. (2020). Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13827-13828. https://doi.org/10.1609/aaai.v34i10.7185

Section

Student Abstract Track