Video understanding still faces significant challenges, particularly in describing the visual content of videos in natural language. Existing video encoder-decoder models struggle to extract deep semantic information and to effectively model the complex contextual semantics of a video sequence. Moreover, different visual elements in a video contribute unequally to the generated description. In this paper, we propose a video description method that fuses instance-aware temporal features. We extract local instance features along the temporal sequence to enhance the perception of temporal instances, and we employ spatial attention to perform a weighted fusion of the temporal features. Finally, we use a bidirectional long short-term memory network to encode the contextual semantic information of the video sequence, which helps generate higher-quality descriptive text. Experimental results on two public datasets demonstrate that our method achieves good performance across various evaluation metrics.
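As an illustration of the spatial-attention weighted fusion step described above, the following is a minimal NumPy sketch. It is not the paper's implementation: the feature shapes, the single learned projection vector `w`, and the function name `spatial_attention_fusion` are all assumptions made for the example. It shows the general idea of scoring per-frame instance/region features, normalizing the scores with a softmax over the spatial axis, and summing the features with those weights.

```python
import numpy as np

def spatial_attention_fusion(features, w):
    """Weighted fusion of spatial/instance features per frame (illustrative sketch).

    features: (T, N, D) array — T frames, N instance regions, D feature dims
              (hypothetical shapes, not from the paper).
    w:        (D,) attention projection vector (stand-in for a learned module).
    Returns the fused (T, D) features and the (T, N) attention weights.
    """
    scores = features @ w                              # (T, N) relevance scores
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)            # softmax over regions
    fused = (attn[..., None] * features).sum(axis=1)   # (T, D) weighted sum
    return fused, attn

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 6, 8))  # 4 frames, 6 regions, 8-dim features
w = rng.standard_normal(8)
fused, attn = spatial_attention_fusion(feats, w)
print(fused.shape)                       # (4, 8)
print(np.allclose(attn.sum(axis=1), 1))  # True: weights sum to 1 per frame
```

In the method described above, the fused per-frame features would then be fed to the bidirectional LSTM encoder; here the attention weights make explicit that different visual elements contribute unequally to the final representation.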