Urban image semantic segmentation faces challenges including the coexistence of multi-scale objects, blurred semantic relationships between complex structures, and interference from dynamic occlusion. Owing to insufficient granularity in multi-scale feature extraction and rigid fusion strategies, existing methods often struggle to balance global contextual understanding of large scenes with fine-grained detail for small objects. To address these issues, this paper proposes an Adaptive Multi-scale Feature Fusion Network (AMFFNet). The network consists of four main modules: a Multi-scale Feature Extraction Module (MFEM), an Adaptive Fusion Module (AFM), an Efficient Channel Attention (ECA) module, and an auxiliary supervision head. First, the MFEM uses multiple depthwise strip convolutions to capture features at various scales, effectively exploiting contextual information. Then, the AFM employs a dynamic weight-assignment strategy to harmonize multi-level features, strengthening the network's ability to model complex urban scene structures. In addition, the ECA module introduces cross-channel interactions and nonlinear transformations to mitigate segmentation omissions for small objects. Finally, the auxiliary supervision head allows shallow features to directly influence the final segmentation result. Experiments on the CamVid and Cityscapes datasets show that the proposed network achieves mean Intersection over Union (mIoU) scores of 77.8% and 81.9%, respectively, outperforming existing methods. These results confirm that AMFFNet understands complex urban scenes more effectively.
Shusheng Li, Liang Wan, Lu Tang, Zhining Zhang
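The ECA mechanism referenced in the abstract follows the Efficient Channel Attention design: global average pooling produces a per-channel descriptor, a small 1-D convolution across the channel axis models local cross-channel interaction, and a sigmoid gate reweights each channel. The following is a minimal NumPy sketch of this idea, not the paper's implementation; the uniform kernel stands in for the learned 1-D convolution weights, and the kernel size `k` would normally be chosen adaptively from the channel count.

```python
import numpy as np

def eca_channel_attention(x, k=3):
    """ECA-style channel attention on a (C, H, W) feature map.

    Sketch only: the averaging kernel is a stand-in for learned
    1-D convolution weights over the channel axis.
    """
    c, h, w = x.shape
    y = x.mean(axis=(1, 2))                       # (C,) global average pooling
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")              # pad the channel axis
    kernel = np.full(k, 1.0 / k)                  # placeholder for learned weights
    conv = np.array([np.dot(yp[i:i + k], kernel)  # 1-D conv: local cross-channel mixing
                     for i in range(c)])
    weights = 1.0 / (1.0 + np.exp(-conv))         # sigmoid gate, one weight per channel
    return x * weights[:, None, None]             # rescale channels of the input
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1), so the module reweights rather than replaces features, which is what lets it emphasize small-object channels without disturbing the spatial layout.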