Yilei Liang, Hua Han, Li Huang, Chunyuan Wang
Visible–infrared person re-identification (VI-ReID) is an active research area within re-identification. To reduce the gap between the two modalities in VI-ReID and improve recognition accuracy, this paper proposes a four-stream network with nonsignificant feature learning (FS-NSF) for VI-ReID. First, dual-intermediate-modality images are generated from the visible and infrared modalities by two lightweight networks, with labels inherited from the source visible and infrared images. Second, the ResNet50 backbone is split and reconstructed into a network adapted to shared feature learning across the four modalities. Finally, a multi-branch, multi-scale, multi-granularity feature extraction strategy extracts both significant and nonsignificant features. Comparison experiments are conducted on the SYSU-MM01 and RegDB datasets. The results show that, compared with state-of-the-art methods, our method performs strongly on both datasets, particularly on SYSU-MM01, where every metric improves by 1.9–6.28%.
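As a rough illustration (not the authors' code), the dual-intermediate-modality generation step can be pictured as a lightweight, learnable channel mixing applied to each source image, with the label carried over unchanged. The following minimal NumPy sketch uses a 1×1-convolution-style weight matrix as a stand-in for the paper's lightweight generator networks; all function names and weight values here are hypothetical.

```python
import numpy as np

def channel_mix(image, weights):
    """1x1-convolution-style channel mixing: each output channel is a
    weighted sum of the input channels. A minimal stand-in for a
    lightweight intermediate-modality generator (hypothetical)."""
    # image: (C_in, H, W); weights: (C_out, C_in) -> output: (C_out, H, W)
    return np.tensordot(weights, image, axes=([1], [0]))

def make_intermediate_modalities(visible, infrared, w_vis, w_ir):
    """Generate the two intermediate-modality images, one from the
    visible image and one from the infrared image; each keeps the
    identity label of its source image."""
    mid_from_vis = channel_mix(visible, w_vis)   # visible -> intermediate
    mid_from_ir = channel_mix(infrared, w_ir)    # infrared -> intermediate
    return mid_from_vis, mid_from_ir

# Toy example: 3-channel visible image, 1-channel infrared image,
# both mapped to a single-channel intermediate modality.
vis = np.random.rand(3, 8, 8)
ir = np.random.rand(1, 8, 8)
w_vis = np.full((1, 3), 1.0 / 3.0)  # hypothetical learned weights
w_ir = np.eye(1)
m_vis, m_ir = make_intermediate_modalities(vis, ir, w_vis, w_ir)
print(m_vis.shape, m_ir.shape)
```

In the paper these mappings are learned end to end; the fixed averaging weights above merely show the shape of the transformation.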