<p>Recent research on single-image super-resolution (SR) shows that deep learning-based methods outperform state-of-the-art traditional techniques, but at the cost of increased memory consumption and computational complexity. This results in longer training and inference times and higher GPU memory requirements than traditional approaches. Modifications to the network architecture can affect performance, complexity, and memory requirements. This paper explores how SR performance can be enhanced with a simple yet efficient deep-learning SR model, focusing on local and global connections in residual networks, channel attention mechanisms, and up-sampling techniques. Our efficient, lightweight, locally dense residual SR architecture achieves performance comparable to state-of-the-art models while reducing spatial complexity by up to 1/6 and inference time by half compared to the baseline.<br>This work has been partly supported by a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2) and by the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.</p>
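<p>As a rough illustration of the channel-attention idea the abstract refers to, the following is a minimal NumPy sketch of a common squeeze-and-excitation-style formulation: global average pooling summarizes each channel, a small bottleneck MLP produces per-channel weights in (0, 1), and the input features are rescaled by those weights. The function name, shapes, and reduction ratio here are illustrative assumptions, not the paper's exact block.</p>

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    x  : feature map of shape (C, H, W)
    w1 : bottleneck weights of shape (C // r, C), r = reduction ratio
    w2 : expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating
    z = np.maximum(w1 @ s, 0.0)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # per-channel weights in (0, 1)
    # Scale: reweight each channel of the input feature map
    return x * a[:, None, None]

# Hypothetical toy example: 8 channels, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = channel_attention(x, w1, w2)
```

<p>In a residual SR network, such a gate is typically applied to the output of a residual block before the skip connection is added back, letting the network emphasize informative channels at little extra cost.</p>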