Most current robots perform localization and environment mapping with either a LiDAR or a visual sensor. Visual localization is generally less accurate than LiDAR localization, but it captures richer environmental information and supports reconstruction of the surroundings. This paper combines the advantages of the two methods: the robot coordinate transformation obtained by the LiDAR part is passed to the visual part and converted into the initial value for pose optimization in the visual tracking thread. This initial value is more accurate than one derived from a reference keyframe or a motion model, which improves localization accuracy and reduces tracking loss to some extent. Finally, a more accurate point cloud map of the environment is constructed for subsequent work.
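The core idea above can be illustrated with a toy sketch (all names are hypothetical, not the paper's actual implementation): a 2-D robot pose is refined by minimizing landmark re-projection error, where the optimizer is seeded with a LiDAR-derived pose instead of a motion-model prediction, so it starts close to the minimum.

```python
import math

def predict(pose, landmark):
    """Express a world landmark in the robot frame for pose (x, y, theta)."""
    x, y, th = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    c, s = math.cos(th), math.sin(th)
    return (c * dx + s * dy, -s * dx + c * dy)

def cost(pose, landmarks, obs):
    """Sum of squared errors between predicted and observed landmark positions."""
    return sum((px - ox) ** 2 + (py - oy) ** 2
               for (px, py), (ox, oy) in
               ((predict(pose, lm), o) for lm, o in zip(landmarks, obs)))

def refine(init_pose, landmarks, obs, step=1e-3, iters=2000, eps=1e-6):
    """Plain numeric gradient descent; a stand-in for the pose optimization
    performed in a visual tracking thread."""
    pose = list(init_pose)
    for _ in range(iters):
        grad = []
        for i in range(3):
            hi, lo = pose[:], pose[:]
            hi[i] += eps
            lo[i] -= eps
            grad.append((cost(hi, landmarks, obs) - cost(lo, landmarks, obs)) / (2 * eps))
        pose = [p - step * g for p, g in zip(pose, grad)]
    return pose

# Synthetic scene: observations generated from the (unknown) true pose.
true_pose = (1.0, 2.0, 0.3)
landmarks = [(4.0, 1.0), (2.0, 5.0), (-1.0, 3.0), (0.5, -2.0)]
obs = [predict(true_pose, lm) for lm in landmarks]

# A LiDAR-derived initial value close to the true pose, as proposed above.
lidar_init = (1.05, 1.95, 0.28)
refined = refine(lidar_init, landmarks, obs)
```

Because the LiDAR initialization lands near the basin of the true pose, the simple local optimizer converges reliably; a poor initial guess (e.g. from a violated constant-velocity motion model) is exactly the situation in which tracking is lost.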
Wenzhong Ou, Daipeng Feng, Ke Luo, Xu Chen
Zengyuan Wang, Jianhua Zhang, Shengyong Chen, Conger Yuan, Jingqian Zhang, Jianwei Zhang
Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Mingzhi Wu, Ke Lu