JOURNAL ARTICLE

Dual-Adversarial Representation Disentanglement for Visible Infrared Person Re-Identification

Ziyu Wei, Xi Yang, Nannan Wang, Xinbo Gao

Year: 2023 | Journal: IEEE Transactions on Information Forensics and Security | Vol: 19 | Pages: 2186-2200 | Publisher: Institute of Electrical and Electronics Engineers

Abstract

Heterogeneous pedestrian images are captured by visible and infrared cameras operating in different spectra, and they play an important role in night-time video surveillance. However, visible infrared person re-identification (VI-REID) remains a challenging problem due to the considerable cross-modality discrepancies. To extract modality-invariant features that are discriminative for person identity, recent studies tend to regard modality-specific features as noise and discard them. In fact, the modality-specific characteristics, which contain background and color information, are indispensable for learning modality-shared features. In this paper, we propose a novel Dual-Adversarial Representation Disentanglement (DARD) model to separate modality-specific features from tangled pedestrian representations and effectively learn robust modality-invariant representations. Specifically, our method employs dual-adversarial learning, incorporating image-level channel exchange and feature-level magnitude change to introduce variations in modality-specific representations. This deliberate perturbation raises the difficulty of learning modality-shared features. Simultaneously, to control the range over which modality-specific features change, bi-constrained noise alleviation is introduced during adversarial learning, keeping feature generation and adversary in balance. The proposed dual-adversarial learning methodology enhances robustness against cross-modality visual discrepancy and strengthens the discriminative power of the learned modality-shared representations without introducing additional network parameters. This improvement further elevates the retrieval performance of VI-REID. Extensive experiments with insightful analysis on two cross-modality re-identification datasets verify the effectiveness and superiority of the proposed DARD method.
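The two perturbations named in the abstract can be illustrated in a minimal sketch. The function names, the random channel-reordering rule, and the magnitude range below are illustrative assumptions, not the paper's actual DARD formulation:

```python
import numpy as np

def channel_exchange(img, rng):
    """Image-level perturbation (illustrative sketch): replace the RGB
    channels with a random selection of one another (repeats allowed),
    disturbing the color information that separates visible from
    infrared imagery."""
    perm = rng.choice(3, size=3)  # e.g. (2, 0, 2); channels may repeat
    return img[..., perm]

def magnitude_change(feat, rng, low=0.5, high=1.5):
    """Feature-level perturbation (illustrative sketch): rescale a
    feature vector's magnitude while preserving its direction, varying
    modality-specific strength without altering identity structure."""
    scale = rng.uniform(low, high)  # hypothetical range, not from the paper
    return feat * scale

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3))   # toy H x W x C image
feat = rng.standard_normal(8)                # toy feature vector
aug_img = channel_exchange(img, rng)
aug_feat = magnitude_change(feat, rng)
print(aug_img.shape, aug_feat.shape)
```

In a training loop, such perturbations would be applied to make the discriminator's job harder, pushing the encoder toward modality-shared features; the bi-constrained noise alleviation described in the abstract would bound how far these perturbations may drift.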

Keywords:
Discriminative model, Computer science, Modality, Adversarial learning, Artificial intelligence, Robustness, Reinforcement learning, Feature learning, Pattern recognition, Computer vision, Deep learning, Machine learning

Metrics

Cited by: 37
FWCI (Field-Weighted Citation Impact): 6.73
References: 76
Citation Normalized Percentile: 0.97 (in top 1% and top 10%)

Topics

Video Surveillance and Tracking Methods
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Fire Detection and Safety Systems
Physical Sciences →  Engineering →  Safety, Risk, Reliability and Quality
Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition