Lingwu Meng, Jing Wang, Yang Yang, Liang Xiao
Remote sensing image captioning aims to generate meaningful and grammatically accurate sentences for remote sensing images. However, compared with natural image captioning, remote sensing image captioning faces additional challenges arising from the unique characteristics of remote sensing images. The first challenge is the abundance of objects in these images: as the number of objects increases, it becomes increasingly difficult to determine the main focus of the description. Moreover, objects in remote sensing images often share similar appearances, which further complicates the generation of accurate descriptions. To overcome these challenges, we propose a Prior Knowledge-guided Transformer for remote sensing image captioning. First, scene-level and object-level features are extracted in a Multi-level Feature Extraction module. To further refine and enhance the extracted multi-level features, we introduce a Feature Enhancement module, which combines graph neural networks and attention mechanisms to capture the correlations and differences between objects or scene regions. In addition, we propose a Prior Knowledge augmented Attention mechanism that selects the objects most relevant to the scene regions by establishing relationships between them. This attention mechanism is seamlessly integrated into the Transformer structure, providing valuable prior knowledge that guides the caption generation process. Extensive experiments on three remote sensing image captioning datasets verify the superiority of the proposed method, which outperforms the baseline methods. The code will be publicly available at https://github.com/One-paper-luck/PKG-Transformer.
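The abstract does not give the exact formulation of the Prior Knowledge augmented Attention, but one common way to inject a precomputed relevance prior into a Transformer is to add it as a bias on the attention logits before the softmax. The sketch below illustrates that idea only; the function name, the additive-bias form, and all tensor shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prior_augmented_attention(queries, keys, values, prior_bias):
    """Scaled dot-product attention with an additive prior bias.

    queries:    (n_q, d)  e.g. scene-region features
    keys/values:(n_k, d)  e.g. object features
    prior_bias: (n_q, n_k) precomputed scene-object relevance scores
                (hypothetical stand-in for the paper's prior knowledge).
    """
    d = queries.shape[-1]
    logits = queries @ keys.T / np.sqrt(d)
    logits = logits + prior_bias   # prior knowledge steers object selection
    weights = softmax(logits, axis=-1)
    return weights @ values, weights

# Toy usage: 2 scene regions attending over 3 objects.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4))
k = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 4))
prior = np.array([[2.0, 0.0, 0.0],   # region 0 strongly tied to object 0
                  [0.0, 0.0, 2.0]])  # region 1 strongly tied to object 2
out, w = prior_augmented_attention(q, k, v, prior)
```

With a large positive bias on a scene-object pair, the corresponding attention weight dominates, which is the intended effect of letting prior relationships guide which objects the decoder describes.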