Abstract

Monocular Simultaneous Localization and Mapping (SLAM) approaches have progressed significantly over the last two decades. However, keypoint-based approaches provide only sparse structural information in the form of a 3D point cloud, which does not meet the requirements of applications such as Augmented Reality (AR). SLAM systems that provide dense environment maps are either computationally intensive or require depth information from additional sensors. In this paper, we use a deep neural network that estimates planar regions from RGB input images and fuses its output iteratively with the point cloud map of a SLAM system to create an efficient monocular planar SLAM system. We present qualitative results of the created maps, as well as an evaluation of the tracking accuracy and runtime of our approach.
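The fusion step described in the abstract, fitting plane models to the map points that a network labels as planar, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the least-squares plane fit, and the inlier threshold are assumptions chosen for illustration:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD.
    Returns (normal, d) for the plane normal . x + d = 0."""
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance of the centered
    # points, i.e. the last right-singular vector.
    _, _, vh = np.linalg.svd(points - centroid)
    normal = vh[-1]
    d = -normal @ centroid
    return normal, d

def fuse_plane(map_points, planar_mask, inlier_thresh=0.02):
    """One illustrative fusion iteration (hypothetical interface):
    fit a plane to the map points flagged as planar by the network's
    segmentation mask, then keep only points within a distance
    threshold of the fitted plane."""
    candidates = map_points[planar_mask]
    normal, d = fit_plane(candidates)
    dist = np.abs(candidates @ normal + d)
    return normal, d, candidates[dist < inlier_thresh]
```

Iterating this per keyframe, refitting as new map points arrive, is one plausible reading of the "iterative fusion" the abstract mentions; the actual system may use a different plane parameterization or robust estimator.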

Keywords:
Simultaneous localization and mapping (SLAM); monocular; planar; point cloud; RGB; tracking; augmented reality; computer vision; mobile robots; trajectory

Metrics

Cited by: 9
FWCI (Field-Weighted Citation Impact): 2.54
References: 20
Citation Normalized Percentile: 0.92 (in top 10%)

Topics

Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
3D Surveying and Cultural Heritage (Physical Sciences → Earth and Planetary Sciences → Geology)
Augmented Reality Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

Related Documents

JOURNAL ARTICLE

Localisation accuracy of semi-dense monocular SLAM

Kristiaan Schreve, Pieter G. Du Plessies, Matthias Rätsch

Journal: Proceedings of SPIE, the International Society for Optical Engineering. Year: 2017, Vol: 10332, Pages: 103320H