Object segmentation based on multi-sensor fusion is a critical technique for autonomous vehicles, offering several benefits: increased accuracy, robustness to adverse conditions, heightened situational awareness, and efficient processing. In this paper, we introduce a novel feature-fusion-based object segmentation model, the Depth-Aware Feature Pyramid Network, which integrates RGB and depth information through a multi-scale feature fusion mechanism. The proposed algorithm dynamically fuses features from the two modalities, RGB and depth, to perform depth-aware object segmentation. To validate its performance, we conducted experiments on the Cityscapes benchmark and achieved 72.4% mean Intersection over Union (mIoU), outperforming related object segmentation methods for autonomous vehicles.
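The abstract describes dynamically fusing RGB and depth features at multiple pyramid scales. A minimal NumPy sketch of one plausible form of such gated, per-scale fusion is shown below; the gate computation and function names here are illustrative assumptions (the paper's actual fusion weights are learned), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Numerically standard logistic function used to bound the gate to (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(rgb_feat, depth_feat):
    """Gated fusion at one scale: a per-pixel gate in (0, 1) weights the
    two modalities. Illustrative only; a trained model would learn the gate."""
    gate = sigmoid(rgb_feat.mean(axis=0, keepdims=True)
                   - depth_feat.mean(axis=0, keepdims=True))
    return gate * rgb_feat + (1.0 - gate) * depth_feat

def multiscale_fuse(rgb_pyramid, depth_pyramid):
    """Fuse matching levels of RGB and depth feature pyramids, scale by scale."""
    return [fuse_features(r, d) for r, d in zip(rgb_pyramid, depth_pyramid)]

# Toy two-level pyramids with (channels, H, W) feature maps.
rgb = [np.random.rand(8, 16, 16), np.random.rand(8, 8, 8)]
depth = [np.random.rand(8, 16, 16), np.random.rand(8, 8, 8)]
fused = multiscale_fuse(rgb, depth)
print([f.shape for f in fused])
```

Because the gate is a convex weight, each fused map stays within the range spanned by its RGB and depth inputs, and spatial resolution is preserved at every scale.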
Van Toan Quyen, Jong Hyuk Lee, Min Young Kim