Recently, deep neural networks have led to tremendous advances in image super-resolution. Since super-resolution is a well-known one-to-many inverse problem, deep-learning-based methods tackle it by enlarging the receptive field, so that the network can infer each output pixel from sufficient context. However, most existing studies attain a sufficient receptive field by using larger kernel sizes or designing very deep models, which dramatically increases both the computational cost and the training difficulty. To address this problem, this paper aims to design an effective and easily trainable convolutional neural network. We propose a multi-scale dense network (MSDN) composed of deep concatenation and a basic building block, the multi-scale dense block (MSDB). The proposed MSDB uses convolutions with different dilation rates to gather multi-scale information; concatenating the outputs of these dilated convolutions magnifies the receptive field of a single layer. To ease training, dense skip connections are embedded within the MSDB, and deep concatenation together with a global skip connection further improves training. Consequently, we obtain a network with a large receptive field without resorting to a deeper structure. Experiments indicate that the proposed MSDN achieves state-of-the-art results.
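The claimed advantage rests on simple receptive-field arithmetic: with stride 1, each convolution adds (kernel_size − 1) × dilation pixels of context, so one layer of parallel dilated branches can cover more context than several stacked plain convolutions. The sketch below illustrates this with assumed dilation rates (the abstract does not specify the rates used in the MSDB):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolution layers.

    `layers` is a list of (kernel_size, dilation) pairs; with stride 1,
    each layer enlarges the receptive field by (kernel_size - 1) * dilation.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three stacked plain 3x3 convolutions: 7x7 receptive field.
plain = receptive_field([(3, 1)] * 3)  # -> 7

# A single multi-scale layer whose widest branch is a 3x3 convolution
# with dilation 4 (an assumed rate; narrower branches see less context):
multi_scale = receptive_field([(3, 4)])  # -> 9
```

Under these assumptions, one multi-scale layer already exceeds the 7×7 context of three stacked plain layers, which is the sense in which the MSDB trades depth for dilation.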