JOURNAL ARTICLE

Feature fusion attention network for underwater image enhancement

Abstract

The attenuation, scattering, and absorption of light in water make underwater imaging a challenging task, and global contextual information is key to enhancing underwater images. However, most existing algorithms rely on simple convolutional neural networks (CNNs), whose convolutional operations are inherently local, making it difficult to capture global contextual information directly. This problem is addressed by embedding the Swin Transformer, which has global modeling capability, into U-Net. Specifically, a feature fusion module is constructed to change how features propagate between network layers and to fuse contextual feature information across different layers. The proposed method is evaluated on the constructed dataset. Experimental results show that the proposed method outperforms existing methods on objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), and significantly improves the quality and visibility of underwater images.
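The abstract reports results in terms of PSNR. As a minimal illustration of how that metric is defined (this is a generic sketch, not the paper's evaluation code), PSNR is computed from the mean squared error between a reference and a distorted image:

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE),
    where MSE is the mean squared error between the two images."""
    reference = np.asarray(reference, dtype=np.float64)
    distorted = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 10 gray levels on 8-bit images
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # MSE = 100, roughly 28.13 dB
```

Higher PSNR indicates the enhanced image is closer to the reference; SSIM complements it by comparing local luminance, contrast, and structure rather than raw pixel error.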

Keywords:
Computer science, Artificial intelligence, Underwater, Pattern recognition, Convolutional neural network, Feature extraction, Feature fusion, Image quality, Contextual information, Data mining, Computer vision, Image enhancement, Engineering

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 23
Citation Normalized Percentile: 0.11

Topics

Image Enhancement Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image Fusion Techniques
Physical Sciences →  Engineering →  Media Technology
Image and Signal Denoising Methods
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
