Differing from the conventional person Re-ID task, visible-infrared person re-identification aims to retrieve and match images of the same identity across the visible and infrared modalities, which addresses the limitation of RGB-based conventional person Re-ID in dark environments. The channel discrepancy and the modality discrepancy between RGB and infrared (IR) images are two key challenges for cross-modality person re-identification. Existing works mainly focus on using a single-stream network to learn modality-shared features by embedding the two modalities into the same feature space, which aligns modality features to increase intra-class cross-modality similarity. However, these methods cannot effectively learn inter-class differences within the same modality. In this paper, we propose an Attention-based Dual-Stream Modality-aware method (ADSM) to solve this problem. Our method contains two parts: 1) an attention mechanism that learns inter-class differences within the same modality, and 2) a dual-stream network based on attention and ResNet that fuses the cross-modality information. Extensive experiments have been conducted on two public cross-modality person Re-ID datasets, SYSU-MM01 and RegDB. The experimental results show that our method outperforms the current state-of-the-art methods by a wide margin, which confirms the superiority of our model.
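The high-level dataflow described above — two modality-specific streams whose outputs are fused into a shared embedding via attention weighting — can be sketched as follows. This is a minimal NumPy illustration only; the function and parameter names are illustrative assumptions, not the paper's actual architecture, and each linear stream stands in for a full ResNet branch.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, W):
    # One modality-specific stream: a ReLU-activated linear projection,
    # standing in for a ResNet branch (illustrative simplification).
    return np.maximum(x @ W, 0.0)

def attention_fuse(f_rgb, f_ir, w_att):
    # Attention over the two streams: softmax over per-stream scores
    # yields fusion weights for a shared cross-modality embedding.
    scores = np.array([f_rgb @ w_att, f_ir @ w_att])
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a[0] * f_rgb + a[1] * f_ir, a

# Toy inputs: one RGB and one IR feature vector (dim 8 -> embedding dim 4).
x_rgb, x_ir = rng.normal(size=8), rng.normal(size=8)
W_rgb, W_ir = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
w_att = rng.normal(size=4)

f_rgb = stream(x_rgb, W_rgb)
f_ir = stream(x_ir, W_ir)
fused, weights = attention_fuse(f_rgb, f_ir, w_att)

print(fused.shape)  # shared embedding has the common feature dimension
```

In the actual method the two branches would share an identity-classification loss, so that the attention weights are trained to balance modality-specific and modality-shared information.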
Zhenyu Cui, Jiahuan Zhou, Yuxin Peng
Meibin Qi, Suzhi Wang, Guanghong Huang, Jianguo Jiang, Jingjing Wu, Cuiqun Chen