Jianwei Fan, Qing Xiong, Jian Li, Yuanxin Ye
Establishing feature correspondences between multimodal remote sensing images is an essential task for realizing diverse applications. Conventional matching methods, which employ gradient or phase congruency (PC) for feature detection and description, produce limited performance when images suffer from strong noise and intensity differences. In this study, we propose a novel modality-invariant structural feature representation (MISFR) method for multimodal remote sensing image matching. First, a maximal/minimal enhanced PC moment (EPCM) representation is designed by incorporating PC with a multiscale relative total variation (RTV) model for feature detection and description. The EPCM integrates the advantages of these two models to exploit intrinsic structural features while providing robustness against modality variations. Then, to improve feature stability and repeatability, an aggregation structural feature detector (ASFD) is proposed, in which corner and edge points are separately extracted on the minimal and maximal EPCMs. Moreover, an adaptive binning-based log-polar descriptor, named enhanced features of orientated PC (EOPC), is constructed on the maximal EPCM to robustly characterize feature points. Various experiments on a series of multimodal remote sensing images demonstrate that our MISFR significantly improves matching performance compared with four state-of-the-art approaches.
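The log-polar descriptor idea mentioned above can be illustrated with a minimal NumPy sketch. This is a generic GLOH-style binning over a structural magnitude/orientation map, not the paper's actual EOPC: the function name, the bin counts (`n_rad`, `n_ang`, `n_ori`), and the support radius `r_max` are illustrative assumptions, and the adaptive binning of the real descriptor is omitted.

```python
import numpy as np

def log_polar_descriptor(mag, ori, center, n_rad=3, n_ang=8, n_ori=8, r_max=16):
    """Sketch of a log-polar descriptor: accumulate an orientation
    histogram of `mag`-weighted votes into log-radial x angular spatial
    bins around `center`. All parameters are illustrative, not EOPC's."""
    cy, cx = center
    h, w = mag.shape
    desc = np.zeros((n_rad, n_ang, n_ori))
    for y in range(max(0, cy - r_max), min(h, cy + r_max + 1)):
        for x in range(max(0, cx - r_max), min(w, cx + r_max + 1)):
            dy, dx = y - cy, x - cx
            r = np.hypot(dy, dx)
            if r == 0 or r > r_max:
                continue
            # logarithmic radial bin: rings near the keypoint are finer
            rb = min(int(np.log1p(r) / np.log1p(r_max) * n_rad), n_rad - 1)
            # angular spatial bin from the offset direction
            ab = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * n_ang) % n_ang
            # orientation bin (orientations folded to [0, pi))
            ob = int((ori[y, x] % np.pi) / np.pi * n_ori) % n_ori
            desc[rb, ab, ob] += mag[y, x]
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v  # L2-normalize for illumination robustness
```

Descriptors built this way can be compared with a plain Euclidean or cosine distance; the log-radial spacing makes the representation more tolerant of localization error far from the keypoint, which is the usual motivation for log-polar layouts.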