Yusheng Tao, Jian Zhang, Tianquan Chen, Yuqing Wang, Yuesheng Zhu
Unsupervised re-identification (Re-ID) methods have long been dominated by convolutional neural networks (CNNs). Most current methods apply pseudo-label-based contrastive learning (CL) and have achieved substantial progress. However, they have limited capacity to represent global features, suffer severe performance drops when trained with limited computing resources, and cannot effectively exploit pseudo-label information during CL training. To tackle these problems, we propose a Transformer-based Contrastive Learning (TransCL) method that enhances the performance of CL and improves the feature representation ability of Re-ID. A batch and memory contrast (BMC) strategy is developed to optimize multi-level CL tasks concurrently so as to fully exploit the pseudo-label information, and a GCN aggregated clustering (GAC) scheme is designed to generate more effective pseudo labels for CL. Extensive experimental results indicate that GAC and BMC, working with a vision transformer (ViT), achieve better training performance and enhance the representation ability of the Re-ID model. TransCL surpasses the state-of-the-art CNN method by 8.0% in mAP on the challenging MSMT17 dataset.
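The memory-side contrastive objective that pseudo-label-based CL methods typically build on can be sketched as an InfoNCE-style loss against cluster centroids stored in a memory bank. The sketch below is a generic illustration of that idea, not the paper's BMC implementation; the function name, the temperature value, and the numpy formulation are all illustrative assumptions.

```python
import numpy as np

def cluster_contrastive_loss(features, memory, pseudo_labels, temperature=0.05):
    """InfoNCE-style loss: pull each embedding toward the centroid of its
    pseudo-label cluster in the memory bank, push it away from the others.

    features:      (N, D) L2-normalized sample embeddings
    memory:        (K, D) L2-normalized cluster centroids (memory bank)
    pseudo_labels: (N,)   cluster index assigned to each sample
    """
    # Cosine similarities scaled by temperature -> logits over K clusters.
    logits = features @ memory.T / temperature              # (N, K)
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each sample's own pseudo-label cluster.
    return -log_probs[np.arange(len(features)), pseudo_labels].mean()
```

A sample whose embedding already coincides with its assigned centroid incurs near-zero loss, while a sample far from its centroid (or close to a competing one) is penalized, which is what drives the clusters apart during training.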