Abstract

Most current robots perform localization and environment mapping with either LiDAR or visual sensors. Vision-based localization is less accurate than LiDAR-based localization, but it captures richer environmental information and enables reconstruction of the surroundings. This paper combines the advantages of both methods: the robot coordinate transformation estimated by the LiDAR module is passed to the visual module and converted into the initial value for pose optimization in the visual tracking thread. This initial value is more accurate than one obtained from the reference keyframe or the motion model, which improves localization accuracy and reduces tracking loss to some extent. Finally, a more accurate point cloud map of the environment is constructed for subsequent work.
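The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names, frame names, and numeric values here are assumptions. The LiDAR-estimated robot pose is chained with a fixed LiDAR-to-camera extrinsic calibration to produce the initial pose for visual tracking optimization, falling back to the motion-model prediction when no LiDAR estimate is available.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous transform with the given translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_pose_init(T_world_lidar, T_lidar_camera, T_motion_model):
    """Initial camera pose for visual pose optimization (illustrative).

    Prefer the LiDAR-estimated robot pose, mapped into the camera frame
    via the fixed extrinsic calibration; otherwise fall back to the
    motion-model prediction, as a conventional visual SLAM tracker would.
    """
    if T_world_lidar is None:
        return T_motion_model
    # Chain the transforms: world -> LiDAR frame, then LiDAR -> camera frame
    return T_world_lidar @ T_lidar_camera

# Illustrative values: LiDAR pose 1 m forward, camera mounted 0.2 m above the LiDAR
T_wl = translation(1.0, 0.0, 0.0)
T_lc = translation(0.0, 0.0, 0.2)
T_init = camera_pose_init(T_wl, T_lc, translation(0.9, 0.0, 0.2))
print(T_init[:3, 3])  # → [1.  0.  0.2]
```

The returned matrix would then seed the iterative pose optimization in the tracking thread, replacing the reference-keyframe or motion-model guess.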

Keywords:
Computer vision, Computer science, Artificial intelligence, Point cloud, Simultaneous localization and mapping, Coordinate system, Robot, Thread (computing), Laser, Frame (networking), Mobile robot

Metrics

- Cited by: 3
- FWCI (Field-Weighted Citation Impact): 0.95
- References: 11
- Citation Normalized Percentile: 0.84

Topics

- Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
- 3D Surveying and Cultural Heritage (Physical Sciences → Earth and Planetary Sciences → Geology)
- Robotic Path Planning Algorithms (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)