Xueyan Ding, Xiyu Chen, Yongxing Sui, Yafei Wang, Jianxin Zhang
Due to the distinctive attributes of underwater environments, underwater images frequently suffer from low contrast, color distortion, and noise. Existing underwater image enhancement techniques often generalize poorly, failing to adapt to images captured in diverse underwater environments. To address these issues, we introduce a diffusion model-based underwater image enhancement method trained with an adversarial learning strategy, referred to as adversarial learning diffusion underwater image enhancement (ALDiff-UIE). The generator removes noise step by step through a diffusion process, progressively aligning the distribution of the degraded underwater image with that of a clear image, while the discriminator identifies residual discrepancies and pushes the generator to refine its outputs into clear, high-quality results. Moreover, we propose a multi-scale dynamic-windowed attention mechanism that fuses global and local features, improving how information is captured and integrated. Qualitative and quantitative experiments on four benchmark datasets—UIEB, U45, SUIM, and LSUI—show that ALDiff-UIE increases the average PCQI by approximately 12.8% and UIQM by about 15.6%. These results indicate that our method outperforms several mainstream approaches in both visual quality and quantitative metrics, demonstrating its effectiveness for underwater image enhancement.
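The abstract does not detail how the multi-scale dynamic-windowed attention is implemented, so the following is only a minimal illustrative sketch of the general idea—self-attention computed within local windows at several window sizes, with the per-scale outputs fused. All function names are hypothetical, the projections are identity mappings for brevity, and the fusion here is a plain average rather than the paper's (unspecified) dynamic rule.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, win):
    """Self-attention computed independently inside each win x win window.

    feat: (H, W, C) feature map; H and W are assumed divisible by win.
    Real models would apply learned Q, K, V projections; identity is used here.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(0, H, win):
        for j in range(0, W, win):
            tokens = feat[i:i + win, j:j + win].reshape(-1, C)   # (win*win, C)
            scores = softmax(tokens @ tokens.T / np.sqrt(C))     # attention weights
            out[i:i + win, j:j + win] = (scores @ tokens).reshape(win, win, C)
    return out

def multi_scale_window_attention(feat, wins=(2, 4)):
    """Fuse window attention at several scales; small windows capture local
    detail, larger ones approximate more global context. Simple averaging
    stands in for the paper's dynamic fusion, which the abstract leaves open."""
    return np.mean([window_attention(feat, w) for w in wins], axis=0)
```

For example, applying `multi_scale_window_attention` to an `(8, 8, 4)` feature map returns a fused map of the same shape, with each scale's attention restricted to its own window grid.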