Abstract Traditional feature-based Visual SLAM focuses primarily on camera pose accuracy while neglecting the refinement of 3D map reconstruction. To address this limitation, this paper proposes ON-SLAM, a Visual SLAM (Simultaneous Localization and Mapping) system that requires no pre-training, adapts quickly to new and changing environments, and generates dense point cloud maps in real time. Existing methods, both learning-based and non-learning-based, fall short of these requirements due to their algorithmic limitations. The ON-SLAM framework integrates visual odometry with implicit neural representations to achieve real-time localization and high-precision map construction. The system is compatible with monocular cameras and relies solely on RGB inputs, enhancing its applicability in real-world scenarios. Experimental results show that ON-SLAM achieves state-of-the-art accuracy in both localization and map construction.
Jens Naumann, Binbin Xu, Stefan Leutenegger, Xingxing Zuo
Nicola Krombach, David Droeschel, Sebastian Houben, Sven Behnke
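The abstract describes a pipeline in which a visual odometry front end tracks the camera pose while an implicit map representation is refined online from RGB input alone. The following is a minimal sketch of that loop, not the authors' implementation: the `vo_track` stub and the `ImplicitMap` class (a simple voxel-keyed color average standing in for a neural representation) are hypothetical placeholders for illustration only.

```python
# Hypothetical sketch of the tracking-and-mapping loop suggested by the
# abstract: per-frame pose tracking plus incremental map refinement.
# All names here are illustrative, not part of ON-SLAM.

def vo_track(prev_pose):
    """Stub visual odometry: returns the next camera pose (x, y, z, yaw).
    A real front end would estimate this from image correspondences."""
    x, y, z, yaw = prev_pose
    return (x + 0.1, y, z, yaw + 0.01)  # assume constant forward motion

class ImplicitMap:
    """Toy stand-in for an implicit neural map: quantized 3D points are
    mapped to running-average RGB colors, refined as frames arrive."""
    def __init__(self, voxel=0.5):
        self.voxel = voxel
        self.colors = {}  # voxel key -> (rgb component sums, observation count)

    def _key(self, point):
        return tuple(round(c / self.voxel) for c in point)

    def update(self, point, rgb):
        # Fuse a new RGB observation into the cell containing `point`.
        s, n = self.colors.get(self._key(point), ((0.0, 0.0, 0.0), 0))
        self.colors[self._key(point)] = (
            tuple(si + ci for si, ci in zip(s, rgb)), n + 1)

    def query(self, point):
        # Return the averaged color at `point`, or None if unobserved.
        s, n = self.colors.get(self._key(point), ((0.0, 0.0, 0.0), 0))
        return tuple(si / n for si in s) if n else None

pose = (0.0, 0.0, 0.0, 0.0)
world = ImplicitMap()
for _ in range(10):
    pose = vo_track(pose)                  # localization step
    point = (pose[0] + 1.0, 0.0, 0.0)      # pretend one surface point is seen
    world.update(point, (0.8, 0.2, 0.1))   # mapping step, RGB input only

print(len(world.colors))           # number of mapped cells
print(world.query((1.1, 0.0, 0.0)))
```

In an actual system the per-frame pose would come from feature tracking or photometric alignment, and the map update would be a gradient step on a neural field rather than a running average; the sketch only shows how the two stages interleave in real time.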