JOURNAL ARTICLE

Multi-Frame Self-Supervised Depth Estimation with Multi-Scale Feature Fusion in Dynamic Scenes

Abstract

Monocular depth estimation is a fundamental task in computer vision and multimedia. The self-supervised learning pipeline makes it possible to train a monocular depth network without ground-truth depth labels. In this paper, a multi-frame depth model with multi-scale feature fusion is proposed to strengthen texture and spatio-temporal features, improving the robustness of depth estimation between frames with large camera ego-motion. A novel, geometrically explainable dynamic-object detection method is also proposed. The detected dynamic objects are excluded during training, which preserves the static-environment assumption and alleviates the accuracy degradation of multi-frame depth estimation in dynamic scenes. Finally, robust knowledge distillation with a consistent teacher network and a reliability guarantee is proposed, which improves multi-frame depth estimation without increasing computational complexity at test time. Experiments show that the proposed methods achieve substantial performance improvements in multi-frame depth estimation.
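The dynamic-object exclusion described in the abstract can be illustrated with a minimal sketch: in self-supervised pipelines, a source frame is warped into the target view using the predicted depth and ego-motion, and the photometric error between the target and the warped image supervises the network; pixels flagged as dynamic are masked out so they do not violate the static-scene assumption. The function below assumes a plain L1 photometric error for simplicity; the paper's actual loss and detection method are not reproduced here, and all names are illustrative.

```python
import numpy as np

def masked_photometric_loss(target, warped, dynamic_mask):
    """L1 photometric error averaged over static pixels only (illustrative).

    target, warped : (H, W, 3) float arrays; `warped` is the source frame
        warped into the target view via predicted depth and ego-motion.
    dynamic_mask   : (H, W) bool array, True where a dynamic object was
        detected; those pixels break the static-environment assumption
        and are excluded from the supervisory signal.
    """
    per_pixel = np.abs(target - warped).mean(axis=-1)  # (H, W) L1 error
    static = ~dynamic_mask
    if not static.any():          # degenerate case: everything is dynamic
        return 0.0
    return float(per_pixel[static].mean())
```

With this masking, a moving car that cannot be explained by camera ego-motion contributes no gradient, so the depth network is trained only where the rigid-scene reprojection model actually holds.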

Keywords:
Computer science; Artificial intelligence; Robustness; Computer vision; Monocular; Fusion mechanism; Motion estimation; Pattern recognition; Fusion

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 0.91
References: 37
Citation Normalized Percentile: 0.71


Topics

Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Optical measurement and interference techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)