Paper ID | SS-AVV.7
Paper Title | DEPTH ESTIMATION FROM MONOCULAR IMAGES AND SPARSE RADAR USING DEEP ORDINAL REGRESSION NETWORK
Authors | Chen-Chou Lo, Patrick Vandewalle, KU Leuven, Belgium
Session | SS-AVV: Special Session: Autonomous Vehicle Vision
Location | Area A
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Special Sessions: Autonomous Vehicle Vision
Abstract | We integrate sparse radar data into a monocular depth estimation model and introduce a novel preprocessing method for reducing the sparseness and limited field of view of the radar measurements. We explore the intrinsic error of different radar modalities and show that our proposed method yields more data points with reduced error. We further propose a novel method for estimating dense depth maps from monocular 2D images and sparse radar measurements using deep learning, building on the deep ordinal regression network by Fu et al. Radar data are integrated by first converting the sparse 2D points to a height-extended 3D measurement and then incorporating it into the network using a late fusion approach. Experiments are conducted on the nuScenes dataset. Our experiments demonstrate state-of-the-art performance in both day and night scenes.
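The height-extension preprocessing described in the abstract can be pictured with a minimal sketch. The code below is purely illustrative and is not the authors' implementation: it assumes radar returns have already been projected into the image plane as (u, v, depth) triples, approximates the height extension directly in the image plane rather than in 3D, and the function name `extend_radar_heights`, the NumPy data layout, and the fixed `extension_px` value are assumptions made for this example.

```python
import numpy as np

def extend_radar_heights(radar_uv_depth, image_shape, extension_px=100):
    """Illustrative sketch of radar height extension (not the paper's code).

    Each sparse radar return, projected to pixel (u, v) with range `depth`,
    is extended upwards into a vertical column of pixels so that the
    resulting radar depth channel is less sparse.

    radar_uv_depth: (N, 3) array of (u, v, depth) projected radar returns.
    image_shape:    (H, W) of the camera image.
    extension_px:   assumed fixed vertical extension in pixels.
    """
    H, W = image_shape
    depth_channel = np.zeros((H, W), dtype=np.float32)  # 0 = no measurement
    for u, v, d in radar_uv_depth:
        u, v = int(round(u)), int(round(v))
        if not (0 <= u < W and 0 <= v < H):
            continue
        v_top = max(0, v - extension_px)          # extend the point upwards
        column = depth_channel[v_top:v + 1, u]
        # Where extended columns overlap, keep the nearest (smallest) depth.
        mask = (column == 0) | (column > d)
        column[mask] = d
    return depth_channel
```

In a late fusion setup such as the one described in the abstract, this extended radar depth channel would be processed by its own encoder branch and merged with the image features deeper in the network, rather than being concatenated with the RGB input at the first layer.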