JOURNAL ARTICLE

Depth-Aware Feature Pyramid Network for Semantic Segmentation

Abstract

Object segmentation based on multi-sensor fusion is a critical technique for autonomous vehicles, providing several benefits, including increased accuracy, robustness to adverse conditions, heightened situational awareness, and efficient processing. In this paper, we introduce a novel feature-fusion-based object segmentation model, the Depth-Aware Feature Pyramid Network, which integrates RGB and depth information through a multi-scale feature fusion mechanism. The proposed algorithm dynamically fuses features from the two modalities, RGB and depth, to perform depth-aware object segmentation. To validate its performance, we conducted experiments on the Cityscapes benchmark and achieved a 72.4% mean Intersection over Union (mIoU), outperforming related object segmentation methods for autonomous vehicles.
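To make the fusion idea concrete, the following is a minimal, hypothetical sketch of one way to dynamically fuse RGB and depth feature maps at a single pyramid level: a per-pixel sigmoid gate computed from both modalities weights the RGB and depth features before summation. This is an illustrative NumPy mock-up (the names `gated_fusion`, `w`, `b` and the gating scheme are assumptions for illustration), not the paper's exact architecture.

```python
# Hypothetical gated RGB-depth fusion at one pyramid level (illustrative only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb_feat, depth_feat, w, b):
    """Fuse two same-shape (C, H, W) feature maps with a per-pixel gate.

    The gate is a learned projection of the channel-concatenated features,
    so each output pixel is a convex combination of RGB and depth features.
    """
    stacked = np.concatenate([rgb_feat, depth_feat], axis=0)          # (2C, H, W)
    gate = sigmoid(
        np.tensordot(w, stacked, axes=([1], [0])) + b[:, None, None]  # (C, H, W)
    )
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Toy example with random features and weights standing in for learned ones.
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
rgb = rng.standard_normal((C, H, W))
depth = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1   # gate projection weights
b = np.zeros(C)                             # gate bias

fused = gated_fusion(rgb, depth, w, b)
print(fused.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), every fused value is a convex combination of the corresponding RGB and depth features, which is one common way such "dynamic" multi-modal fusion is realized; in a full multi-scale network this step would be repeated at each pyramid level.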

Keywords:
Semantic segmentation; Depth-aware fusion; Feature pyramid network; Multi-sensor fusion; RGB-D; Autonomous vehicles; Computer vision; Object detection; Feature extraction

Metrics

Cited by: 2
Field Weighted Citation Impact (FWCI): 0.36
References: 15
Citation Normalized Percentile: 0.53

Topics

Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Video Surveillance and Tracking Methods
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Robotics and Sensor-Based Localization
Physical Sciences →  Engineering →  Aerospace Engineering
