Xin Ye, Junchen Pan, Jichen Chen, Jingbo Zhang
Abstract
Semantic segmentation, the task of assigning a class label to each pixel in an image, has found applications in various real-world scenarios, including autonomous driving and scene understanding. However, its widespread use is hindered by its high computational cost. In this paper, we propose an efficient semantic segmentation method based on a Feature Cascade Fusion Network (FCFNet) to address this challenge. FCFNet adopts a dual-path framework comprising a Spatial Information Path (SIP) and a Context Information Path (CIP). The SIP is a shallow branch that captures the local dependencies of each pixel to improve the accuracy of detailed segmentation. The CIP is the main, deeper branch that extracts sufficient contextual information from the input features. Moreover, we design an Efficient Receptive Field Module (ERFM) to enlarge the receptive field in the SIP, and an Attention Shuffled Refinement Module to refine feature maps from different stages. Finally, we present an Attention-Guided Fusion Module to fuse low- and high-level feature maps effectively. Experimental results show that FCFNet achieves 70.7% mean intersection over union (mIoU) on the Cityscapes dataset and 68.1% mIoU on the CamVid dataset, with inference speeds of 110 and 100 frames per second (FPS), respectively. Additionally, we evaluated FCFNet on the Nvidia Jetson Xavier embedded device, where it delivered competitive performance while significantly reducing power consumption.
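The abstract does not specify the exact form of the Attention-Guided Fusion Module, so the following is only an illustrative sketch of the general idea of attention-guided fusion of low- and high-level feature maps: the high-level map is pooled into per-channel attention weights that re-weight the low-level map before the two are combined. The function name, the sigmoid gating, and the additive combination are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

def sigmoid(x):
    """Numerically straightforward logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def attention_guided_fusion(low, high):
    """Hypothetical attention-guided fusion of two (C, H, W) feature maps.

    The high-level map is global-average-pooled into a (C, 1, 1) channel
    descriptor, squashed to (0, 1) weights, and used to gate the low-level
    map before an element-wise sum with the high-level map.
    """
    gap = high.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1) channel descriptor
    attn = sigmoid(gap)                          # per-channel attention weights
    return attn * low + high                     # fused (C, H, W) feature map

# Example: fuse two random feature maps of matching shape.
rng = np.random.default_rng(0)
low = rng.standard_normal((8, 16, 16))   # shallow (SIP-like) features
high = rng.standard_normal((8, 16, 16))  # deep (CIP-like) features
fused = attention_guided_fusion(low, high)
print(fused.shape)  # (8, 16, 16)
```

In a real network the two branches would produce maps of different resolution, so the high-level map would first be upsampled (or the low-level map downsampled) to a common shape before fusion; that step is omitted here for brevity.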
Huilin Chen, Shengsong Yang, Ting Lyu, Jie Zhong, A. Chen, Yizhang Jiang, Chengcheng Sun, Yisheng Peng