Taian Guo, Tao Dai, Ling Liu, Zexuan Zhu, Shu-Tao Xia
Convolutional Neural Networks (CNNs) have been widely used in video super-resolution (VSR). Most existing VSR methods focus on how to utilize information from multiple frames while neglecting correlations among intermediate features, which limits the feature expressiveness of the models. To address this problem, we propose a novel Scale-and-Attention-Aware (SAA) network that applies different attention to streams of different temporal lengths, and further explores both spatial and channel attention on separate streams with a newly proposed Criss-Cross Channel Attention Module (C3AM). Experiments on public VSR datasets demonstrate the superiority of our method over other state-of-the-art methods in terms of both quantitative and qualitative metrics.
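The abstract does not detail how C3AM is built, but the channel-attention idea it builds on can be sketched in a squeeze-and-excitation style: pool each channel to a single descriptor, gate it with a sigmoid, and rescale the channel. The function below is a dependency-free illustration of that generic mechanism, not the authors' implementation; the plain-list tensor layout and the parameter-free gate are assumptions made for clarity.

```python
import math

def channel_attention(features):
    """Minimal channel-attention sketch (squeeze-and-excitation style).

    `features` is a list of C channels, each a 2D list (H x W) of floats.
    This is a generic illustration of channel attention, not the C3AM
    module from the paper, whose exact design is not given in the abstract.
    """
    # Squeeze: global average pooling, one descriptor per channel
    descriptors = [
        sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        for ch in features
    ]
    # Excite: sigmoid gate per channel (a learned MLP would sit here
    # in a real module; omitted to keep the sketch parameter-free)
    gates = [1.0 / (1.0 + math.exp(-d)) for d in descriptors]
    # Rescale: multiply every value in a channel by its gate
    return [
        [[v * g for v in row] for row in ch]
        for ch, g in zip(features, gates)
    ]
```

A channel with a larger average activation receives a gate closer to 1 and is passed through almost unchanged, while weakly activated channels are suppressed, which is the sense in which attention reweights feature correlations across channels.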