Sadaf Farkhani, Mikkel Fly Kragh, Peter Christiansen, Rasmus Nyholm Jørgensen, Henrik Karstoft
Autonomous driving in agriculture can be made easier and safer if guided by dense depth maps, since dense depth maps outline scene geometry. An RGB monocular image carries only weak cues about depth, and although LiDAR provides accurate depth measurements, it yields only sparse depth maps. By interpolating a sparse LiDAR scan with an aligned color image, reliable dense depth maps can be created. In this paper, we apply a deep regression model in which an RGB monocular image is used for sparse-to-dense LiDAR depth map completion. Our model is based on the U-Net architecture presented in [9]. However, training the model on the FieldSAFE dataset, a multi-modal agricultural dataset, leads to overfitting. Therefore, we trained the model on the KITTI dataset, which has high image diversity, and tested it on FieldSAFE. We produced an error map to analyze the performance of the model on near and distant objects in the FieldSAFE dataset. The error map shows the absolute difference between the depth ground truth and the predicted depth. The model performs 63.6% better on near objects than on distant objects in FieldSAFE, whereas it performs 10.96% better on distant objects than on near objects in KITTI.
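The error-map evaluation described in the abstract (per-pixel absolute difference between ground-truth and predicted depth, restricted to pixels that actually have a LiDAR measurement) can be sketched as follows. This is a minimal illustration, not the paper's code: the function name and the convention that 0 encodes a missing LiDAR return are assumptions.

```python
import numpy as np

def depth_error_map(gt, pred):
    """Per-pixel absolute error between ground-truth and predicted depth.

    Pixels without a LiDAR ground-truth measurement (encoded here as 0)
    are set to NaN so they do not skew summary statistics.
    """
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    error = np.abs(gt - pred)
    error[gt <= 0] = np.nan  # no ground truth at this pixel
    return error

# Toy example: 2x3 depth maps in metres; 0 marks a missing LiDAR return.
gt = np.array([[5.0, 0.0, 20.0],
               [10.0, 40.0, 0.0]])
pred = np.array([[5.5, 3.0, 18.0],
                 [9.0, 44.0, 7.0]])

err = depth_error_map(gt, pred)
mae = np.nanmean(err)  # mean absolute error over valid pixels only
```

Splitting such an error map by distance bands (e.g. near vs. far ground-truth depth) is one way to obtain the near/far comparisons reported above.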