A video super-resolution reconstruction algorithm with multi-level attention is proposed to fully exploit the abundant redundant information between frames while balancing reconstruction performance against speed. The algorithm performs adaptive residual-space fusion and multi-level inter-frame information fusion through a recursive adaptive aggregation network composed of adaptive multi-level attention modules. Specifically, each adaptive multi-level attention module fuses inter-frame information at multiple levels. Multiple cascaded adaptive multi-level attention modules with shared weights then perform adaptive fusion in the residual space. Finally, a reconstruction network refines and enlarges the fused features to produce the final high-resolution video frame. The algorithm better restores high-frequency details such as textures and edges, strengthens long-term temporal dependency modeling, and achieves a balance between reconstruction quality and speed. Experimental results on standard datasets show that the proposed algorithm effectively improves video super-resolution reconstruction performance.
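The pipeline described above (inter-frame attention fusion, cascaded shared-weight modules operating in residual space, then an enlarging reconstruction stage) can be sketched at a toy scale in numpy. This is a minimal illustration of the general idea only, not the paper's actual network: all shapes, the dot-product similarity, the single shared weight matrix, and the nearest-neighbour upscaling are hypothetical stand-ins for the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(ref, neighbors):
    """Weight each neighbor frame's features by per-pixel similarity to the
    reference frame, then sum -- a stand-in for inter-frame attention fusion."""
    sims = np.stack([(ref * n).sum(axis=0) for n in neighbors])  # (T, H, W)
    w = softmax(sims, axis=0)                                    # attention over frames
    return sum(w[t] * n for t, n in enumerate(neighbors))        # (C, H, W)

def cascaded_fusion(ref, neighbors, shared_w, steps=3):
    """Apply the same fusion module `steps` times, reusing one weight matrix
    (shared weights) and accumulating the result as a residual update."""
    c, h, w = ref.shape
    feat = ref
    for _ in range(steps):
        fused = attention_fuse(feat, neighbors)
        feat = feat + np.tanh(shared_w @ fused.reshape(c, -1)).reshape(c, h, w)
    return feat

def upscale(feat, scale=2):
    """Nearest-neighbour enlargement standing in for the reconstruction network."""
    return feat.repeat(scale, axis=1).repeat(scale, axis=2)

# Toy data: 4-channel 8x8 features for a reference frame and 3 neighbors.
ref = rng.standard_normal((4, 8, 8))
neighbors = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
shared_w = rng.standard_normal((4, 4)) * 0.1

sr = upscale(cascaded_fusion(ref, neighbors, shared_w))
print(sr.shape)  # (4, 16, 16)
```

Note that because `shared_w` is reused at every cascade step, the parameter count is independent of the number of steps, which is one way such designs keep the performance/speed trade-off in check.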