Owing to the strong complementarity of synthetic aperture radar (SAR) and optical imagery in remote sensing, matching SAR images with optical images enables complementary extraction of ground surface information, which benefits tasks such as change detection, target recognition, and land cover classification. With the development of deep learning, most current SAR-optical image matching models employ siamese neural networks and achieve good matching performance. In practical applications, however, the test data are often unknown and may differ from the training data, so a model that performs well on the training dataset may generalize poorly at test time. Domain generalization research for heterogeneous image matching aims to improve the generalization of matching models to unknown test domains, a task of practical significance and broad application prospects. In this paper, we propose a contrastive learning-based SAR-optical image matching model designed to enhance the generalization of matching models trained on multiple datasets. In cross-domain scenarios, enforcing feature consistency between matching points preserves more of the information conducive to matching; we therefore introduce a contrastive learning method that constrains the features of matching points to be consistent, improving the generalization performance of cross-source image matching models.
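The abstract does not specify the exact loss, but the idea of constraining matching-point features to be consistent across the SAR and optical branches is commonly realized with an InfoNCE-style contrastive objective: features of corresponding points form positive pairs, while all other pairings in the batch act as negatives. The sketch below is a minimal NumPy illustration of such a loss; the function name, batch layout (row i of each array describes the same ground point), and temperature value are assumptions, not details from the paper.

```python
import numpy as np

def infonce_matching_loss(sar_feats, opt_feats, temperature=0.07):
    """InfoNCE-style contrastive loss over a batch of matching points.

    sar_feats, opt_feats: (N, D) arrays; row i of each is assumed to
    describe the same ground point, so diagonal pairs are positives
    and all off-diagonal pairs serve as in-batch negatives.
    """
    # L2-normalise each feature vector so the dot product is a cosine score.
    sar = sar_feats / np.linalg.norm(sar_feats, axis=1, keepdims=True)
    opt = opt_feats / np.linalg.norm(opt_feats, axis=1, keepdims=True)
    logits = sar @ opt.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Cross-entropy with the diagonal (true match) as each row's target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls the SAR and optical features of the same point together while pushing apart features of different points, which is one way to realize the feature-consistency constraint described above.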