Single image super-resolution (SISR) is an important technique for improving many remote sensing applications, and global features play a key role in the pixel generation of SISR. In this paper, we propose a self-attention fusion (SAF) module that combines spatial attention and channel attention in parallel to address this problem. The SAF module can be flexibly inserted into many popular deep-learning-based SISR models to further improve their representation ability and capture global features. Experiments on the UC Merced dataset show that the SAF module improves the performance of classic SISR models and achieves state-of-the-art super-resolution results.
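The parallel combination of spatial and channel attention described above can be illustrated with a minimal sketch. The abstract does not specify the internal layers of the SAF module, so the pooling-and-sigmoid gating below is an assumption for illustration only; the actual module presumably uses learned convolutional layers rather than these parameter-free branches.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Global average pool over spatial dims -> (C,) weights.
    w = sigmoid(x.mean(axis=(1, 2)))       # per-channel gate in (0, 1)
    return x * w[:, None, None]            # rescale each channel

def spatial_attention(x):
    # Pool over the channel axis -> (H, W) spatial importance map.
    m = sigmoid(x.mean(axis=0))
    return x * m[None, :, :]               # rescale each spatial location

def saf_module(x):
    # Parallel fusion: both branches see the same input, and their
    # outputs are summed with a residual connection to the input.
    return x + channel_attention(x) + spatial_attention(x)

x = np.random.randn(8, 16, 16)             # toy (C, H, W) feature map
y = saf_module(x)
print(y.shape)                             # (8, 16, 16)
```

Because the module preserves the input's shape, it can be dropped between existing layers of an SISR backbone without changing the rest of the network, which matches the "flexibly inserted" property claimed in the abstract.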
Wenzong Jiang, Lifei Zhao, Yanjiang Wang, Weifeng Liu, Baodi Liu
Ziang Li, Wen Lu, Zhaoyang Wang, Jian Hu, Zeming Zhang, Lihuo He
Wangyou Chen, Shenming Qu, Laigan Luo, Yongyong Lu