Forecasting hazardous weather, especially severe convective weather, is critical to protecting property and human life. Radar image extrapolation is a widely adopted method for this purpose, spanning traditional optical flow-based methods and deep learning-based methods. However, the accuracy of the optical flow approach declines significantly over time, and the predictions generated by ConvRNN- and CNN-based models tend to be smooth and blurry. To address these limitations, we propose a Radar Multi-scale Fusion Generative Prediction Network (Fugen-Net) for radar image extrapolation. We use an enhanced U-Net as the generator of a conditional generative adversarial network (CGAN) and design a Multi-scale Feature Fusion and Transmission (MFT) structure to improve the feature extraction capacity while reducing the number of parameters. To evaluate the forecasting ability of Fugen-Net, we conduct experiments on 4 years of radar images collected in Beijing from 2014 to 2017 and compare its performance with optical flow and U-Net. The experimental results demonstrate that Fugen-Net outperforms the other two methods when both high-threshold skill scores and image realism are considered.
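The abstract does not detail the MFT structure, but the general idea of multi-scale feature fusion can be illustrated with a minimal NumPy sketch: a feature map is pooled at several scales, restored to the original resolution, and concatenated along the channel axis. The function names, scale choices, and shapes below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool an (H, W, C) feature map by factor k (H, W divisible by k)."""
    h, w, c = x.shape
    return x.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample2d(x, k):
    """Nearest-neighbour upsample an (H, W, C) feature map by factor k."""
    return x.repeat(k, axis=0).repeat(k, axis=1)

def multi_scale_fuse(x, scales=(1, 2, 4)):
    """Pool the input at several scales, restore resolution, and concatenate
    along the channel axis -- a generic multi-scale fusion step."""
    branches = [upsample2d(avg_pool2d(x, s), s) if s > 1 else x
                for s in scales]
    return np.concatenate(branches, axis=-1)

feat = np.random.rand(64, 64, 8)   # stand-in for a radar-echo feature map
fused = multi_scale_fuse(feat)
print(fused.shape)                 # (64, 64, 24): 3 scales x 8 channels
```

In a learned network the pooling branches would be convolutional and the concatenation would feed subsequent layers; the sketch only shows the shape bookkeeping of fusing coarse and fine context.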
Shengchao Chen, Ting Shu, Huan Zhao, Qilin Wan, Jincan Huang, Cailing Li