Zikang Yuan, Ken Cheng, Jinhui Tang, Xin Yang
Visual odometry (VO) is a fundamental technique for many robotics and augmented reality (AR) applications. However, most existing RGB-D VO systems suffer large performance degradation when large occlusions are present and/or a large portion of depth values are invalid due to the limited range of an RGB-D camera, prohibiting their use in practical applications. To address these two problems, we present RGB-D DSO, an RGB-D direct sparse odometry whose core is sliding-window optimization with occlusion removal and a depth refinement module. Occlusion removal excludes the negative effects of occluded objects when minimizing the final energy function for camera pose tracking. Depth refinement ensures that sufficiently many valid depth values are uniformly distributed across the depth map of each keyframe. Experimental results on three public datasets demonstrate that our method achieves smaller tracking errors than most existing state-of-the-art methods. Meanwhile, our system takes only 21.93 ms to track a frame, which is faster than most existing methods.
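The abstract describes excluding occluded points when minimizing a photometric energy. The following is a minimal sketch of that idea, not the paper's actual formulation: it assumes a simple sum-of-squares photometric energy and treats any residual above an illustrative threshold as likely occluded, dropping it from the sum. The function name, threshold value, and residual form are all assumptions for illustration.

```python
import numpy as np

def photometric_energy(residuals, occlusion_thresh=0.5):
    """Sum of squared photometric residuals, excluding points whose
    residual magnitude exceeds a threshold.

    This is a crude stand-in for occlusion removal: points occluded in
    the current frame tend to produce large photometric residuals, so
    thresholding discards them before the energy is accumulated.
    The threshold value (0.5) is an illustrative assumption.
    """
    residuals = np.asarray(residuals, dtype=float)
    visible = np.abs(residuals) < occlusion_thresh  # mask out likely-occluded points
    return float(np.sum(residuals[visible] ** 2))
```

In a full system this masking would be applied inside each iteration of the sliding-window optimization, so that occluded observations never contribute gradients to the pose estimate.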