Multi-sensor fusion has proven to be an effective way to achieve accurate and robust pose estimation in Simultaneous Localization and Mapping (SLAM) tasks, and how sensor observations are utilized directly affects the accuracy of pose estimation. However, fusing the vision and lidar modalities is challenging because their characteristics are fundamentally different. Some works extract features from vision and lidar separately, but some of the extracted features are unstable and cannot be observed repeatedly, which degrades odometry performance. Since line features, as simple geometric primitives of the environment, are easy to extract with both cameras and lidar, this paper proposes a tightly coupled lidar-visual-inertial odometry method that exploits 3D-2D line correspondences, built on the LVI-SAM framework. The method improves the front-end performance of the odometry by extracting, fusing, and tracking line features in the common field of view of the lidar and the camera. It first extracts single-modality line features in the Visual-Inertial Odometry (VIO) and Lidar-Inertial Odometry (LIO) subsystems respectively, then uses lidar line-feature point sets to assist the depth recovery of visual line features, and tracks features across multiple frames using line segment descriptors and 3D-2D reprojection. In the pose estimation module, line features are weighted by their stability scores. Experiments show that our method achieves an overall 10% improvement in pose estimation accuracy on both the SUPS and M2DGR datasets.
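To make the 3D-2D line correspondence concrete, the sketch below (C++ with Eigen, not the authors' implementation) shows one common form of line reprojection residual: a 3D line whose depth was recovered with lidar points is projected into the image and measured against the tracked 2D segment, with the residual scaled by a stability weight. The function names and the scalar `stability_score` are illustrative assumptions; the paper's exact residual and weighting scheme may differ.

```cpp
// Minimal sketch of a weighted 3D-2D line reprojection residual,
// assuming a pinhole camera and a 3D line given by two endpoints.
#include <Eigen/Dense>

// Signed distance from pixel p to the infinite image line through a and b,
// using the homogeneous line l = a~ x b~.
double PointToLineDistance(const Eigen::Vector2d& p,
                           const Eigen::Vector2d& a,
                           const Eigen::Vector2d& b) {
  Eigen::Vector3d l = Eigen::Vector3d(a.x(), a.y(), 1.0)
                          .cross(Eigen::Vector3d(b.x(), b.y(), 1.0));
  return (l.x() * p.x() + l.y() * p.y() + l.z()) / l.head<2>().norm();
}

// Pinhole projection of a camera-frame 3D point with intrinsics fx, fy, cx, cy.
Eigen::Vector2d Project(const Eigen::Vector3d& Pc, double fx, double fy,
                        double cx, double cy) {
  return {fx * Pc.x() / Pc.z() + cx, fy * Pc.y() / Pc.z() + cy};
}

// Residual of a 3D line (world-frame endpoints Pw1, Pw2, e.g. fitted to a
// lidar line-feature point set) against an observed 2D segment (p1, p2):
// both projected endpoints should lie on the observed image line.
// R, t map world coordinates to the camera frame; stability_score
// down-weights lines that are rarely re-observed.
Eigen::Vector2d LineReprojResidual(
    const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
    const Eigen::Vector3d& Pw1, const Eigen::Vector3d& Pw2,
    const Eigen::Vector2d& p1, const Eigen::Vector2d& p2,
    double fx, double fy, double cx, double cy, double stability_score) {
  Eigen::Vector2d q1 = Project(R * Pw1 + t, fx, fy, cx, cy);
  Eigen::Vector2d q2 = Project(R * Pw2 + t, fx, fy, cx, cy);
  return stability_score * Eigen::Vector2d(PointToLineDistance(q1, p1, p2),
                                           PointToLineDistance(q2, p1, p2));
}
```

Minimizing such residuals jointly with point and inertial factors is the standard tightly coupled formulation; down-weighting low-stability lines keeps spurious or rarely re-observed segments from corrupting the pose estimate.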
Qiliang Du, Bojie Chen, Lianfang Tian, Ling Yuan