Wenhui Wang, Anna Wang, Qing Ai, Chen Liu, Jinglu Liu
Due to atmospheric scattering and absorption, hazy weather frequently reduces the visibility of outdoor scenes. Single image dehazing is considered an ill-posed and challenging problem in computer vision. To restore visibility in inclement weather, we propose an attention-to-attention generative adversarial network (AAGAN) motivated by the human visual perception mechanism. More specifically, a dense channel attention model is embedded into the encoder, and its output is projected forward to the corresponding multiscale spatial attention model in the decoder. Together, the two attention models form an attention-to-attention mechanism that implements attention projection and captures global feature dependencies across the whole network. In addition, we analyze the dehazing mechanism based on the atmospheric scattering model, and then utilize an improved RaLSGAN to recover more realistic texture information and enhance visual contrast for different hazy scenes. Finally, to improve the visual quality of image restoration, we remove all instance normalization layers to avoid unnecessary artifacts, and introduce spectral normalization for all convolution layers to stabilize the training process. Qualitative assessments and analyses demonstrate that our proposed approach achieves remarkable dehazing performance on both synthetic and real-world scenes compared with previous state-of-the-art methods.
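The atmospheric scattering model the abstract analyzes is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the clear scene radiance, t the transmission map, and A the global atmospheric light. A minimal NumPy sketch of haze synthesis under this model (the function name and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image via the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)).

    J : clear scene radiance (array in [0, 1])
    t : transmission map (scalar or array in [0, 1])
    A : global atmospheric light (scalar in [0, 1])
    """
    return J * t + A * (1.0 - t)

# Toy example: uniform transmission 0.5, atmospheric light 1.0.
# Haze pulls every pixel toward the atmospheric light.
J = np.array([0.2, 0.8])
I = apply_haze(J, t=0.5, A=1.0)
# I == [0.6, 0.9]
```

Dehazing inverts this relation: given I and estimates of t and A, the clear scene is recovered as J = (I − A·(1 − t)) / t, which is ill-posed because t and A are unknown.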