JOURNAL ARTICLE

Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification

Bin Yang, Jun Chen, Cuiqun Chen, Mang Ye

Year: 2023   Journal: IEEE Transactions on Information Forensics and Security   Vol: 19   Pages: 1767-1779   Publisher: Institute of Electrical and Electronics Engineers

Abstract

Unsupervised visible-infrared person re-identification (US-VI-ReID) aims to learn a cross-modality matching model without annotations, an important capability for practical nighttime surveillance, where a specific identity must be retrieved across cameras. Previous advanced US-VI-ReID works mainly focus on associating positive cross-modality identities to optimize the feature extractor in an off-line manner, which inevitably accumulates errors from incorrect off-line cross-modality associations in each training epoch due to intra-modality and inter-modality discrepancies. They ignore direct cross-modality feature interaction during training, i.e., on-line representation learning and updating. Worse still, existing interaction methods are also susceptible to inter-modality differences, leading to unreliable heterogeneous neighborhood learning. To address these issues, we propose a dual consistency-constrained learning framework (DCCL) that simultaneously incorporates off-line cross-modality label refinement and on-line feature interaction learning. The basic idea is that cross-modality instance-instance and instance-identity relations should be consistent. More specifically, DCCL constructs an instance memory, an identity memory, and a domain memory for each modality. At the beginning of each training epoch, DCCL exploits the off-line consistency between cross-modality instance-instance and instance-identity similarities to refine reliable cross-modality identities. During training, DCCL finds credible homogeneous and heterogeneous neighborhoods within each batch, using the on-line consistency between query-instance similarity and query-instance domain-probability similarities for feature interaction, enhancing robustness against intra-modality and inter-modality variations. Extensive experiments validate that our method significantly outperforms existing works and even surpasses some supervised counterparts.
The source code is available at https://github.com/yangbincv/DCCL.
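The off-line consistency idea described in the abstract can be sketched as a toy illustration: a visible instance's cross-modality pseudo-label is kept only if the label suggested by its nearest infrared instance (instance-instance view) agrees with the label suggested by its nearest infrared identity centroid (instance-identity view). This is a minimal sketch with random, hypothetical memory banks, not the authors' implementation; all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical memory banks (L2-normalized features), sizes chosen for illustration.
n_vis, n_ir, n_ids, dim = 8, 6, 3, 16

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

vis_inst = l2norm(rng.normal(size=(n_vis, dim)))   # visible instance memory
ir_inst = l2norm(rng.normal(size=(n_ir, dim)))     # infrared instance memory
ir_ident = l2norm(rng.normal(size=(n_ids, dim)))   # infrared identity (centroid) memory
ir_labels = rng.integers(0, n_ids, size=n_ir)      # pseudo-labels of infrared instances

# View 1 (instance-instance): label of the best-matching infrared instance.
inst_sim = vis_inst @ ir_inst.T                    # cosine similarities, shape (n_vis, n_ir)
label_via_instances = ir_labels[inst_sim.argmax(axis=1)]

# View 2 (instance-identity): nearest infrared identity centroid.
ident_sim = vis_inst @ ir_ident.T                  # shape (n_vis, n_ids)
label_via_identities = ident_sim.argmax(axis=1)

# Off-line consistency check: keep a cross-modality association only when
# both views agree; disagreements are marked unreliable (-1).
consistent = label_via_instances == label_via_identities
refined = np.where(consistent, label_via_identities, -1)
print(refined)
```

In the actual framework this refinement happens at the start of each epoch over the full memory banks, and the on-line counterpart applies an analogous agreement test per batch between query-instance similarities and domain-probability similarities; the sketch above only shows the agreement filtering pattern.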


Metrics

Cited By: 26
FWCI (Field-Weighted Citation Impact): 4.73
Refs: 65
Citation Normalized Percentile: 0.94 (in top 10%)


Topics

Video Surveillance and Tracking Methods
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Automated Road and Building Extraction
Physical Sciences →  Engineering →  Ocean Engineering
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification

Yiyuan Zhang, Yuhao Kang, Sanyuan Zhao, Jianbing Shen

Journal: IEEE Transactions on Information Forensics and Security   Year: 2022   Vol: 18   Pages: 1554-1565
JOURNAL ARTICLE

Dual-branch manifold information consistency for unsupervised visible–infrared person re-identification

Yanling Gao, Zhenyu Wang

Journal: Journal of Visual Communication and Image Representation   Year: 2025   Vol: 113   Pages: 104595
JOURNAL ARTICLE

Augmented Dual-Contrastive Aggregation Learning for Unsupervised Visible-Infrared Person Re-Identification

Bin Yang, Mang Ye, Jun Chen, Zesen Wu

Journal: Proceedings of the 30th ACM International Conference on Multimedia   Year: 2022   Pages: 2843-2851