Haleh Azartash, Kyoung-Rok Lee, Truong Q. Nguyen
In this paper, we propose an accurate method for estimating camera motion in dynamic environments from RGB-D videos. To better exclude moving objects from the stationary background, we apply image segmentation. Next, dense pixel matching between the current and reference color images is performed to construct the 3D point cloud for dense motion estimation. Finally, we perform motion optimization, i.e., we find the combination of motion parameters that minimizes the residual difference between the reference and current images. We validate the proposed method on two benchmark sequences and show that it is more accurate than existing solutions, reducing the RMSE by 6.55% and 7.16% for stationary and dynamic scenes, respectively.
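The motion-optimization step described in the abstract — finding the rigid motion that minimizes the residual between matched 3D points from the reference and current frames — can be sketched with a closed-form least-squares (Kabsch/Procrustes) solver. This is an illustrative stand-in, not the paper's actual optimizer; the point arrays and names below are assumptions for the sketch.

```python
import numpy as np

def estimate_rigid_motion(ref_pts, cur_pts):
    """Least-squares rigid motion (R, t) mapping ref_pts onto cur_pts.

    Minimizes sum_i ||R p_i + t - q_i||^2 over matched 3D points,
    analogous to the residual-minimization step in the abstract.
    Closed-form Kabsch solution via SVD of the cross-covariance.
    """
    ref_c = ref_pts.mean(axis=0)
    cur_c = cur_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
cur = pts @ R_true.T + t_true
R_est, t_est = estimate_rigid_motion(pts, cur)
```

In a dense RGB-D pipeline, the matched points would come from the segmentation-filtered background and the dense pixel matches; a robust iterative optimizer would typically replace the closed-form step when outliers are present.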
Joshua Fabian, Garrett M. Clayton
Dongsheng Yang, Shusheng Bi, Yueri Cai, Jingxiang Zheng, Yuan Chang
Zikang Yuan, Ken Cheng, Jinhui Tang, Xin Yang
Hang Xu, Yanning Guo, Zhen Feng, Zhen Chen
Baozhen Nie, Yingxun Wang, Jiang Zhao, Zhihao Cai, Chiyu Cao