JOURNAL ARTICLE

Attention Guided Unsupervised Learning of Monocular Visual-Inertial Odometry

Abstract

Visual-inertial odometry (VIO) provides vehicles with position information by fusing data from a camera and an inertial measurement unit (IMU), both of which are widely equipped on intelligent vehicles. Recently, unsupervised VIO has made great progress. However, existing methods mainly concatenate features extracted from different domains (visual and inertial), which leads to inconsistency during integration. They also scale poorly to longer sequences because absolute velocity is not available. Hence, we propose a novel attention-based network that fuses the two sensors in a self-motivated and meaningful manner. We design spatial and temporal branches that focus on pairwise images and on image sequences, respectively. Meanwhile, a tiny but effective module (referred to as "warm start") is introduced to produce velocity-related information for the IMU encoder. The proposed attention branches and warm start are shown to improve the robustness of the model in dynamic scenarios and under rapid changes in vehicle velocity. Evaluation on the KITTI and Malaga datasets shows that our method outperforms other recent state-of-the-art VO/VIO methods.
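To make the abstract's fusion idea concrete, the sketch below shows one plausible way to replace plain concatenation with attention-weighted fusion of visual and IMU features, plus a tiny "warm start" module that converts the previous pose estimate into a velocity-like hint. This is a minimal, hypothetical PyTorch illustration; the module names, dimensions, and architecture are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse visual and inertial features via a learned soft attention mask
    instead of plain concatenation (hypothetical sketch, not the paper's code)."""

    def __init__(self, visual_dim=512, imu_dim=256, fused_dim=512):
        super().__init__()
        self.imu_proj = nn.Linear(imu_dim, visual_dim)  # project IMU features to the visual dim
        self.attn = nn.Sequential(                      # per-channel attention weights in [0, 1]
            nn.Linear(2 * visual_dim, visual_dim),
            nn.ReLU(inplace=True),
            nn.Linear(visual_dim, 2 * visual_dim),
            nn.Sigmoid(),
        )
        self.out = nn.Linear(2 * visual_dim, fused_dim)

    def forward(self, visual_feat, imu_feat):
        # visual_feat: (B, visual_dim), imu_feat: (B, imu_dim)
        imu_feat = self.imu_proj(imu_feat)
        joint = torch.cat([visual_feat, imu_feat], dim=-1)  # (B, 2 * visual_dim)
        weights = self.attn(joint)                           # soft mask over both modalities
        return self.out(joint * weights)                     # re-weight, then fuse


class WarmStart(nn.Module):
    """Tiny module turning the previous relative pose into a velocity-related
    hint for the IMU encoder (sizes and inputs are assumptions)."""

    def __init__(self, pose_dim=6, hidden=64, hint_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hint_dim),
        )

    def forward(self, prev_pose):
        return self.mlp(prev_pose)


if __name__ == "__main__":
    fusion, warm = AttentionFusion(), WarmStart()
    v = torch.randn(4, 512)           # visual features for a batch of image pairs
    imu = torch.randn(4, 256)         # encoded IMU window between the two frames
    hint = warm(torch.randn(4, 6))    # velocity hint from the previous relative pose
    fused = fusion(v, imu)
    print(fused.shape, hint.shape)    # torch.Size([4, 512]) torch.Size([4, 32])
```

The design choice illustrated here is that the attention mask lets the network down-weight one modality when it is unreliable (for example, visual features in dynamic scenes), which is the behavior the abstract attributes to its attention branches.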

Keywords:
Visual-inertial odometry, Visual odometry, Inertial measurement unit, Monocular, Deep learning, Encoder, Sensor fusion, Robustness, Computer vision, Mobile robot

Metrics

Cited By: 6
FWCI (Field Weighted Citation Impact): 1.94
References: 32
Citation Normalized Percentile: 0.86


Topics

Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Video Surveillance and Tracking Methods (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)