JOURNAL ARTICLE

D3VIL-SLAM: 3D Visual Inertial LiDAR SLAM for Outdoor Environments

Abstract

Real-time six-degrees-of-freedom pose estimation of ground vehicles underpins applications such as autonomous driving and 3D mapping, especially in outdoor (e.g., urban) environments. Over the past decades, many systems have been proposed, most of which rely on data from a single sensor and struggle to balance accuracy against performance. In this paper, we present D3VIL-SLAM, which extends an existing LiDAR-based SLAM system, ART-SLAM, with inertial and visual information. The front-end comprises three branches that perform short-term data association, i.e., tracking, using laser, visual, and inertial data, respectively. All motion estimates, together with loop constraints derived from both LiDAR scans and images, are used to build a robust g2o pose graph, which is then optimized to best satisfy all motion constraints. We compare the accuracy of our system against state-of-the-art SLAM methods, showing that D3VIL-SLAM is more accurate and produces highly detailed 3D maps while retaining real-time performance. Lastly, we perform a brief ablation study under different restrictions (e.g., when only images are allowed). All experiments evaluate the estimated trajectory displacement on the KITTI dataset.
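The pose-graph back-end described in the abstract can be illustrated with a toy example. The sketch below builds a minimal 1D pose graph (three odometry edges plus one loop-closure edge; all measurement values are invented for illustration) and solves it by linear least squares. The actual system optimizes full 6-DoF poses with g2o, so this is only a structural analogy, not the authors' implementation.

```python
import numpy as np

# Nodes: x0 (fixed at 0), x1, x2, x3. Unknowns: x1, x2, x3.
# Each edge is (from, to, measured displacement); values are hypothetical.
edges = [
    (0, 1, 1.1),  # odometry
    (1, 2, 1.0),  # odometry
    (2, 3, 0.9),  # odometry
    (0, 3, 2.9),  # loop closure: dead-reckoned sum (3.0) disagrees slightly
]

# One residual row per edge: x_j - x_i should equal the measurement z.
A = np.zeros((len(edges), 3))
b = np.zeros(len(edges))
for row, (i, j, z) in enumerate(edges):
    if j > 0:
        A[row, j - 1] += 1.0
    if i > 0:
        A[row, i - 1] -= 1.0
    b[row] = z

# Least-squares solution spreads the loop-closure correction over all poses.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # optimized poses x1, x2, x3
```

Note how the loop closure pulls the final pose from the dead-reckoned 3.0 toward the measured 2.9, distributing the error across the trajectory; this is the same principle the g2o pose graph applies to 6-DoF motion and loop constraints.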

Keywords:
Simultaneous localization and mapping, Computer vision, Computer science, Artificial intelligence, Lidar, Trajectory, Visualization, Inertial measurement unit, Ground truth, Robot, Geography, Remote sensing, Mobile robot

Metrics

Cited By: 4
FWCI (Field Weighted Citation Impact): 2.08
Refs: 27
Citation Normalized Percentile: 0.88


Topics

Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
Robotic Path Planning Algorithms (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)