Robust real-world super-resolution (SR) aims to generate perception-oriented high-resolution (HR) images from the corresponding low-resolution (LR) ones, without access to paired LR-HR ground truth. In this paper, we investigate how to advance the state of the art in real-world SR. Our method deploys an ensemble of generative adversarial networks (GANs) for robust real-world SR, where the different GANs are trained with different adversarial objectives. Because the ground-truth blur and noise models are unknown, we design a generic training set whose LR images are generated from a set of HR images by various degradation models. We achieve good perceptual quality when super-resolving LR images whose degradation was caused by unknown image processing artifacts. For real-world SR on images captured by mobile devices, the GANs are trained with weak supervision on a mobile SR training set of LR-HR image pairs, which we construct from the DPED dataset, which provides registered mobile-DSLR images at the same scale. Our ensemble of GANs uses cues from the image luminance and adapts to generate better HR images under low illumination. Experiments on the NTIRE 2020 real-world super-resolution dataset show that our proposed SR approach achieves good perceptual quality.
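The generic training set described above pairs each HR image with LR versions produced by varied, randomly sampled degradations. The following is a minimal sketch of such a synthesis pipeline; the specific blur kernel, downsampling, and noise parameters are illustrative assumptions, not the paper's exact degradation models.

```python
import numpy as np

def degrade(hr, scale=4, blur_sigma=1.5, noise_sigma=5.0, rng=None):
    """Synthesize an LR image from a grayscale HR array via a
    blur -> downsample -> noise pipeline (illustrative assumption)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Separable Gaussian blur to emulate an unknown blur kernel.
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * blur_sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, hr)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Strided subsampling stands in for the downscaling operator.
    lr = blurred[::scale, ::scale]
    # Additive Gaussian noise models unknown sensor/processing noise.
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0.0, 255.0)
```

In practice, sampling `blur_sigma` and `noise_sigma` per image yields a diverse LR set covering a range of unknown degradations.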
Rao Muhammad Umer, Gian Luca Foresti, Christian Micheloni