JOURNAL ARTICLE

Video Super-Resolution with Pyramid Flow-Guided Deformable Alignment Network

Abstract

Video super-resolution (VSR) aims to recover high-resolution frames from multiple low-resolution counterparts. Fully exploiting the spatio-temporal information in video sequences is essential for VSR but challenging. The challenges can be summarized as: 1) how to perform frame alignment in the presence of displacement and occlusion, and 2) how to effectively utilize spatio-temporal information to improve performance. To address these core problems, this paper proposes a pyramid flow-guided deformable alignment network for VSR (PFDVR) that achieves precise frame alignment and efficient spatio-temporal feature exploitation. Specifically, a pyramid flow-guided deformable alignment module (PFGDA) performs feature alignment in a coarse-to-fine manner, and bidirectional recurrent feature propagation is employed to mine temporal information. To capture long-term spatio-temporal dependencies, we propose an omniscient progressive fusion module (OPF) that fuses multi-level features in both the spatial and temporal dimensions. Experimental results demonstrate that PFDVR achieves promising SR performance.
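The coarse-to-fine, flow-guided alignment idea in the abstract can be illustrated with a toy sketch. The following is a minimal NumPy illustration, not the authors' implementation: it builds an image pyramid, estimates a global flow offset at the coarsest level by brute-force search, and refines it by at most one pixel per finer level, backward-warping the neighbour frame toward the reference at each stage. The function names (`warp`, `coarse_to_fine_align`) and the integer ±1 search are assumptions made for illustration; the paper's PFGDA module instead predicts per-pixel deformable offsets with learned networks guided by optical flow.

```python
import numpy as np

def warp(feat, flow):
    """Backward-warp a feature map (H, W) by per-pixel flow (H, W, 2)
    using bilinear sampling; flow[y, x] = (dx, dy) indexes into `feat`."""
    H, W = feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return (feat[y0, x0] * (1 - wx) * (1 - wy) + feat[y0, x1] * wx * (1 - wy)
            + feat[y1, x0] * (1 - wx) * wy + feat[y1, x1] * wx * wy)

def downsample(feat):
    return feat[::2, ::2]  # stride-2 decimation as the pyramid operator

def upsample_flow(flow):
    # nearest-neighbour 2x upsampling; flow magnitudes double at the finer scale
    return 2.0 * flow.repeat(2, axis=0).repeat(2, axis=1)

def coarse_to_fine_align(ref, nbr, levels=3):
    """Toy coarse-to-fine alignment: at every pyramid level, keep the
    candidate offset whose warped neighbour best matches the reference."""
    pyr_ref, pyr_nbr = [ref], [nbr]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_nbr.append(downsample(pyr_nbr[-1]))
    flow = np.zeros(pyr_ref[-1].shape + (2,))
    for ref_l, nbr_l in zip(reversed(pyr_ref), reversed(pyr_nbr)):
        if flow.shape[:2] != ref_l.shape:
            flow = upsample_flow(flow)  # propagate the coarse estimate down
        best, best_err = flow, np.inf
        for dx in (-1, 0, 1):          # refine by at most 1 px per level
            for dy in (-1, 0, 1):
                cand = flow + np.array([dx, dy], dtype=float)
                err = np.mean((warp(nbr_l, cand) - ref_l) ** 2)
                if err < best_err:
                    best, best_err = cand, err
        flow = best
    return flow
```

On a smooth test image translated by a few pixels, the coarse level locks onto the dominant displacement and the finer levels recover the residual, which is the same divide-and-conquer rationale behind pyramid alignment in VSR networks.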

Keywords:
Pyramid, Computer science, Feature, Artificial intelligence, Frame, Computer vision, Optical flow, Dependency, Image resolution, Feature extraction, Temporal resolution, Pattern recognition, Image

Metrics

Cited By: 3
FWCI (Field-Weighted Citation Impact): 0.55
References: 31
Citation Normalized Percentile: 0.62

Topics

Advanced Image Processing Techniques
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Advanced Vision and Imaging
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Image Processing Techniques and Applications
Physical Sciences → Engineering → Media Technology