Yudai Sadakuni; Ryosuke Kusakari; Kazuya Onda; Yoji Kuroda
In this research, we propose the automatic construction of a highly accurate dataset for depth estimation through sensor fusion of a high-density 3D LiDAR and a stereo camera. With the LiDAR stationary, it is difficult to assign depth information to all pixels: the LiDAR's ranging distance is too short to measure every object visible to the camera, and the point cloud is not dense enough to provide depth for each pixel of the RGB image. We solve these issues by integrating point clouds based on relative positions calculated with high accuracy by localization. To demonstrate the usefulness of this approach, we conducted a driving experiment at the Meiji University Ikuta Campus and compared the depth images of the stereo camera with those of the proposed method.
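The integration step described in the abstract, accumulating LiDAR scans via localized poses and projecting the merged cloud into the camera to densify the depth image, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation; the pinhole intrinsics `K`, the pose matrices, and all function names are assumptions.

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Transform each LiDAR scan (N_i x 3, sensor frame) into a common
    frame using its 4x4 pose from localization, and stack the results
    into one dense point cloud. Poses are hypothetical inputs here."""
    clouds = []
    for pts, T in zip(scans, poses):
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        clouds.append((homog @ T.T)[:, :3])               # apply rigid transform
    return np.vstack(clouds)

def project_depth(points_cam, K, width, height):
    """Project camera-frame 3D points through pinhole intrinsics K into a
    depth image, keeping the nearest depth per pixel (simple z-buffer)."""
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0                                  # keep points in front of camera
    uv = points_cam[valid] @ K.T                   # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)  # perspective divide
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
        if zi < depth[vi, ui]:                     # nearer point wins the pixel
            depth[vi, ui] = zi
    return depth
```

Accumulating many scans before projection is what lets sparse per-scan returns cover far more image pixels than a single stationary sweep could.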
Shih-Li Lu; Shaou-Gang Miaou; Shyang-En Weng; Ying-Cheng Lin
Ki-Hong Park; Seungryong Kim; Kwanghoon Sohn
Guangyao Xu; Xuewei Cao; Jiaxin Liu; Junfeng Fan; En Li; Xiaoyu Long
Suresh Nehra; Jayanta Laha; Jayanta Mukhopadhyay; Prabir Kumar Biswas