Because paired real-world data sets are difficult to obtain for training, most current dehazing networks are trained on synthetic hazy data sets, which leads to drawbacks such as poor generalization to natural haze scenes and loss of depth details. This paper proposes an image dehazing method using CycleGAN based on improved feature fusion to address this problem. The generator network is designed with an encoder-decoder structure, enabling more feature information to be extracted at multiple scales. To restore image detail, residual dense blocks replace the plain convolution modules at each stage of the network, extracting and fusing feature information under different receptive fields. To handle the complex, non-uniform haze distribution in real scenes, an improved channel and spatial attention mechanism is introduced in the skip connections, allowing haze regions of different concentrations to be processed adaptively. In addition, a perceptual loss is introduced to enhance the detail of the output features and improve the quality of the generated image, making it more realistic. Experimental results show that the proposed method achieves better subjective visual quality and image detail, and also improves the objective metrics.
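The residual dense block mentioned above follows a standard pattern: each layer receives the concatenation of the block input and all previous layer outputs, a 1x1 fusion layer maps the concatenated features back to the input channel count, and the block input is added as a local residual. The following is a minimal framework-agnostic sketch of that pattern in NumPy; the 1x1 convolutions, layer count, growth rate, and random weights are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # 1x1 convolution: a per-pixel linear map over channels.
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def residual_dense_block(x, growth=8, layers=3):
    """Sketch of a residual dense block (RDB).

    Each layer sees the concatenation of the block input and all
    previous layer outputs (dense connectivity); a 1x1 fusion maps
    back to the input channel count, and the block input is added
    as a local residual. Weights here are random placeholders.
    """
    c = x.shape[0]
    feats = [x]
    for _ in range(layers):
        cat = np.concatenate(feats, axis=0)           # dense connectivity
        w = rng.standard_normal((growth, cat.shape[0])) * 0.1
        feats.append(np.maximum(conv1x1(cat, w), 0))  # conv + ReLU
    cat = np.concatenate(feats, axis=0)
    w_fuse = rng.standard_normal((c, cat.shape[0])) * 0.1
    return x + conv1x1(cat, w_fuse)                   # local residual learning

x = rng.standard_normal((16, 8, 8))   # (channels, height, width)
y = residual_dense_block(x)
print(y.shape)  # same shape as the input: (16, 8, 8)
```

Because the fusion layer restores the input channel count, the block is shape-preserving and can be dropped into any stage of an encoder-decoder generator without changing the surrounding tensor dimensions.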
Jingpin Wang, Yuan Ge, Jie Zhao, Chao Han