Lanqing Zhang, Zhigao Cui, Yanzhao Su, Nian Wang, Yunwei Lan, Liangyu Zhu, Cheng Chen
ABSTRACT To address the recovery imbalance caused by the spatial heterogeneity of haze concentration in real scenes, this paper proposes an adaptive feature-enhanced contrastive learning framework (AFE-Dehaze). The framework rests on three collaborative mechanisms: (1) a hierarchical multi-scale fusion architecture combining diffusion convolution and channel attention, which preserves edge textures (such as leaf veins and building contours) in thin-haze regions while semantically guiding the reconstruction of structural details in dense-haze regions, improving texture retention by 18% over a conventional U-Net; (2) a concentration-sensitive contrastive learning paradigm that uses pre-trained VGG features as semantic anchors, applying pixel-level constraints in thin haze and feature-space constraints in dense haze, which reduces color distortion by 23% and significantly outperforms methods such as Refusion; (3) a gradient dynamic balancing strategy that automatically adjusts the optimization direction by analyzing the gradient contributions of positive and negative samples, raising PSNR by 1.2 dB and SSIM by 0.05 in non-uniform haze scenes. Experiments on a mixed dataset (RESIDE OTS and real scenes) show that AFE-Dehaze achieves an average PSNR of 28.7 dB and SSIM of 0.91, improving structural similarity in dense-haze regions by 9% over Mamba and validating its generalization to complex haze environments. The framework offers a solution that balances accuracy and robustness for dehazing in real scenarios such as vehicular vision and remote sensing imaging.
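The two adaptive weightings described in mechanisms (2) and (3) can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation: the function names, the scalar abstraction of pixel/feature distances and gradient norms, and the linear blending rule are all assumptions made for exposition.

```python
def density_blended_loss(pixel_dist, feat_dist, density):
    """Blend pixel-level and feature-space contrastive terms by haze density.

    density lies in [0, 1]: near 0 (thin haze) the pixel-level constraint
    dominates; near 1 (dense haze) the feature-space constraint dominates.
    """
    if not 0.0 <= density <= 1.0:
        raise ValueError("density must lie in [0, 1]")
    return (1.0 - density) * pixel_dist + density * feat_dist


def gradient_balance(pos_grad_norm, neg_grad_norm, eps=1e-8):
    """Rebalance positive (clean anchor) vs. negative (hazy) sample terms.

    Down-weights whichever term currently dominates the gradient, so both
    contribute comparably to the optimization direction.
    """
    total = pos_grad_norm + neg_grad_norm + eps
    w_pos = neg_grad_norm / total  # shrink the positive term if it dominates
    w_neg = pos_grad_norm / total
    return w_pos, w_neg
```

In a full training loop, `density` would come from a per-region haze-density estimate and the two distances from L1 errors in image space and VGG feature space, respectively; here they are reduced to scalars to keep the weighting logic visible.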
Yongzhen Wang, Xuefeng Yan, Fu Lee Wang, Haoran Xie, Wenhan Yang, Xiao-Ping Zhang, Jing Qin, Mingqiang Wei
Divine Joseph Appiah, Donghai Guan, Abdul Nasser Kasule, Mingqiang Wei
Qianwen Hou, Shilong Wang, Jianlei Liu
Yang Liu, Zhaoyang Fan, Fei Wang, Yulong Wang, Dajun Du
Guifang Shao, Tao Wei, Qingyuan Zhu, Yunlong Gao, Minyu Cheng, Wusi Wen