Unsupervised domain adaptation in person re-identification (ReID) poses significant challenges due to domain shift, which often causes substantial performance degradation when models are deployed in new environments. To address this issue, we propose a novel framework that synergistically combines a bidirectional image translation network with a feature extraction network, augmented by an identity (ID) consistency loss. Our method leverages generative adversarial networks for image translation, ensuring style consistency between the source and target domains while preserving identity features. Extensive experiments on benchmark datasets demonstrate the superiority of our approach, which sets new state-of-the-art results on multiple challenging tasks. The proposed framework represents a significant advancement in unsupervised domain adaptive person re-identification, with potential implications for real-world surveillance and security applications.
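The ID consistency idea described above can be illustrated with a minimal sketch: penalize the distance between the identity features of an image before and after style translation. This is a hypothetical formulation for illustration only (the function names, cosine-distance choice, and plain-list feature representation are assumptions, not the paper's exact loss):

```python
import math

def l2_normalize(vec):
    # Normalize a feature vector to unit length (guard against zero norm).
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def id_consistency_loss(feat_original, feat_translated):
    """Hypothetical ID consistency loss: 1 - cosine similarity between
    the identity features of an image and its style-translated version.
    A loss of 0 means identity features are perfectly preserved."""
    a = l2_normalize(feat_original)
    b = l2_normalize(feat_translated)
    cosine = sum(x * y for x, y in zip(a, b))
    return 1.0 - cosine

# Example: identical features incur no penalty; dissimilar features are penalized.
print(id_consistency_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ~0.0
```

In practice such a term would be computed by the feature extraction network on both the source image and its GAN-translated counterpart, and added to the translation objective so that style transfer cannot alter identity-discriminative features.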
Mingke Yang, Jing Zhao, Da Huang, Ji Wang
Hamza Rami, Matthieu Ospici, Stéphane Lathuilière
Jiajie Tian, Teng Zhu, Yan Li, Rui Li, Yi Wu, Jianping Fan
Wei Zhang, Peijun Ye, Dihu Chen, Tao Su