Jing Ye, Changzhen Qiu, Zhiyong Zhang
Recently, deep learning has been widely employed to improve the quality of low-light videos. However, most existing low-light video enhancement methods fail to effectively exploit temporal dependencies, so the enhanced videos may suffer from severe noise, loss of texture detail, and temporal inconsistency. In this paper, we propose a novel SNR-prior Guided Trajectory-aware Transformer (SGTT) to enable effective video representation learning for low-light video enhancement. Specifically, a signal-to-noise-ratio prior and cosine similarity are introduced to build trajectory-aware dual attention that captures long-range spatio-temporal dependencies, searching for sharper, highly correlated patches along the same trajectory to assist in enhancing the target frames. Moreover, to adaptively fuse the spatio-temporal information of support frames propagated bidirectionally, an attention-guided spatio-temporal feature aggregation module is proposed to perceive and enhance specific high-quality features. Evaluations on both dynamic and static videos demonstrate the effectiveness of our network, which significantly outperforms state-of-the-art methods.
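The abstract does not give implementation details, but the core idea of SNR-guided, trajectory-aware attention can be illustrated with a minimal sketch. The code below is a hypothetical simplification (the function name, the way the SNR prior scales the similarity logits, and the use of raw feature vectors are all assumptions, not the paper's actual design): candidate patches along one trajectory are scored by cosine similarity to the target patch, the scores are biased by a per-patch SNR prior so that sharper, less noisy patches receive more weight, and the result is a softmax-weighted aggregation.

```python
import numpy as np

def snr_guided_trajectory_attention(target, candidates, snr, tau=1.0):
    """Toy sketch (assumed, not the paper's implementation) of attention
    over patches on one trajectory, biased by an SNR prior.

    target:     (d,)   feature of the target-frame patch
    candidates: (n, d) features of patches on the same trajectory
    snr:        (n,)   estimated signal-to-noise ratio per patch
    Returns the aggregated (d,) feature for the target frame.
    """
    # Cosine similarity between the target and each trajectory patch.
    t = target / (np.linalg.norm(target) + 1e-8)
    c = candidates / (np.linalg.norm(candidates, axis=1, keepdims=True) + 1e-8)
    cos_sim = c @ t                        # (n,)

    # Bias attention toward high-SNR (sharper, cleaner) patches.
    logits = (cos_sim * snr) / tau

    # Numerically stable softmax over the trajectory.
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Weighted aggregation of the candidate features.
    return weights @ candidates
```

In this toy setup, a patch that is both similar to the target and high-SNR dominates the aggregation, which mirrors the stated goal of borrowing sharper, highly correlated patches from the same trajectory.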