This article presents a deep-learning methodology for dehazing remote sensing images using multi-temporal remote sensing data, with the CycleGAN network as its foundation. Using multi-temporal remote sensing resources, we constructed a sample set of roughly aligned satellite images depicting the same scenes under hazy and haze-free conditions. The proposed dehazing algorithm adds a cycle-consistency perceptual loss: the original image and the generated image are fed simultaneously into a pretrained VGG16 network, features are extracted from a chosen layer, and the loss is computed between the two feature maps. This structure reduces reliance on strictly paired training data, preserves image details, and enhances clarity. Experimental results on GF1, GF2, and GF7 satellite images demonstrate that the proposed method produces high-quality, clear images without requiring extensive processing of complex data and labels.
Yingjie Zhang, Qi Hu, Yuning Wei, Jiao Wang