Siyuan Wang, Yinghua Wang, Hongwei Liu, Yuanshuang Sun
Few-shot synthetic aperture radar (SAR) target classification has attracted increasing attention in recent years, yet most existing methods apply off-the-shelf networks designed for natural images directly to SAR images, ignoring the special characteristics of SAR data. In this article, we therefore propose an attribute-guided multi-scale prototypical network (AG-MsPN) combined with subband decomposition for few-shot SAR target classification, aiming to learn more discriminative features from a small number of labeled samples. Since SAR images are essentially complex-valued, containing both amplitude and phase information, we apply subband decomposition to the complex-valued SAR images to explore the backscattering variations of targets and thus obtain more complete target descriptions. Then, considering that different convolutional layers extract complementary features, we build on the prototypical network and propose a multi-scale prototypical network (MsPN) that fuses the features of different layers to enhance the discriminability of the feature representations, alleviating the high intra-class diversity and inter-class similarity of SAR target images. In addition, we define prior binary attributes of SAR targets and add an attribute classification module (ACM) to the MsPN that maps the images into the attribute space for classification. During training, the MsPN and the ACM jointly perform target classification in both the feature space and the attribute space, and the model parameters are optimized with a joint loss. The classification performance of the MsPN is thus further improved under the joint supervision of class labels from the few labeled samples and target attributes from prior knowledge, hence the name AG-MsPN. We demonstrate the effectiveness of AG-MsPN on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark dataset, where it surpasses many existing methods in few-shot settings.
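To make the two core ideas concrete, the following is a minimal PyTorch sketch of an episode-based training step combining a multi-scale prototypical loss with an auxiliary binary-attribute head. Everything here is an illustrative assumption rather than the authors' exact design: the layer sizes, the fusion-by-concatenation rule, the number of attributes, the joint-loss weight, and the use of a 2-channel (real, imaginary) input as a stand-in for the paper's subband-decomposed complex SAR data.

```python
# Hedged sketch of a multi-scale prototypical network with an attribute head.
# Architecture details below are assumptions, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEncoder(nn.Module):
    """Conv backbone that exposes feature maps at two depths and fuses them."""
    def __init__(self, in_ch=2):  # 2 channels as a stand-in for complex data
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1),
                                    nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1),
                                    nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        f1 = self.block1(x)   # shallow features
        f2 = self.block2(f1)  # deeper features
        # Global-average-pool each scale, then concatenate (assumed fusion rule).
        v1 = F.adaptive_avg_pool2d(f1, 1).flatten(1)
        v2 = F.adaptive_avg_pool2d(f2, 1).flatten(1)
        return torch.cat([v1, v2], dim=1)  # fused multi-scale embedding (dim 96)

class AttributeHead(nn.Module):
    """Maps the fused embedding to K binary target attributes (K assumed)."""
    def __init__(self, dim, num_attrs=8):
        super().__init__()
        self.fc = nn.Linear(dim, num_attrs)

    def forward(self, z):
        return self.fc(z)  # attribute logits for a BCE loss

def prototypical_loss(support, support_y, query, query_y, n_way):
    """Standard prototypical-network loss computed on fused embeddings."""
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    dists = torch.cdist(query, protos)  # Euclidean distance to class prototypes
    return F.cross_entropy(-dists, query_y)

# One illustrative training step on a random 3-way, 2-shot episode.
encoder, attr_head = MultiScaleEncoder(), AttributeHead(dim=96)
opt = torch.optim.Adam(list(encoder.parameters()) + list(attr_head.parameters()),
                       lr=1e-3)

support_x, query_x = torch.randn(6, 2, 64, 64), torch.randn(9, 2, 64, 64)
support_y = torch.arange(3).repeat_interleave(2)   # 2 shots per class
query_y = torch.arange(3).repeat_interleave(3)     # 3 queries per class
query_attrs = torch.randint(0, 2, (9, 8)).float()  # assumed prior binary attributes

zs, zq = encoder(support_x), encoder(query_x)
loss_proto = prototypical_loss(zs, support_y, zq, query_y, n_way=3)
loss_attr = F.binary_cross_entropy_with_logits(attr_head(zq), query_attrs)
loss = loss_proto + 0.5 * loss_attr  # joint loss; the 0.5 weight is assumed
opt.zero_grad(); loss.backward(); opt.step()
```

In a setup like this, inference would classify a query by its nearest prototype in the fused feature space, with the attribute head acting purely as auxiliary supervision that shapes the embedding during training.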