Xue Li, Tengfei Liang, Yi Jin, Tao Wang, Yidong Li
Unsupervised person re-identification (ReID) is a challenging task, as no data annotations are available to guide discriminative learning. Existing methods attempt to solve this problem by clustering extracted embeddings to generate pseudo labels. However, most methods ignore the intra-class gap caused by camera style variance, and the methods that do address the negative impact of camera style on the feature distribution are relatively complex and indirect. To solve this problem, we propose a camera-aware style separation and contrastive learning method (CA-UReID), which directly separates camera styles in the feature space with a designed camera-aware attention module. It explicitly divides the learnable feature into camera-specific and camera-agnostic parts, reducing the influence of different cameras. Moreover, to further narrow the gap across cameras, we design a camera-aware contrastive center loss to learn more discriminative embeddings for each identity. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods on unsupervised person ReID.
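The abstract does not give the exact formulation of the camera-aware contrastive center loss, but the idea of pulling a sample toward same-identity centers computed under *other* cameras can be sketched as follows. This is a minimal NumPy illustration under assumed simplifications (cosine similarity, a softmax contrast over all per-camera identity centers, a hypothetical temperature `tau`), not the paper's actual loss:

```python
import numpy as np

def camera_aware_centers(feats, pids, cams):
    # Mean feature vector for every (identity, camera) pair.
    centers = {}
    for key in {(p, c) for p, c in zip(pids, cams)}:
        mask = np.array([(p, c) == key for p, c in zip(pids, cams)])
        centers[key] = feats[mask].mean(axis=0)
    return centers

def contrastive_center_loss(feats, pids, cams, tau=0.5):
    """Contrast each sample against all (identity, camera) centers.

    Positives are centers of the same identity under a *different*
    camera, so minimizing the loss narrows the cross-camera gap.
    (Assumed simplified form; tau is a hypothetical temperature.)
    """
    centers = camera_aware_centers(feats, pids, cams)
    keys = sorted(centers)
    C = np.stack([centers[k] for k in keys])
    C = C / np.linalg.norm(C, axis=1, keepdims=True)   # normalize centers
    F = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    losses = []
    for f, p, c in zip(F, pids, cams):
        sims = C @ f / tau
        logp = sims - np.log(np.exp(sims).sum())       # log-softmax
        pos = [i for i, (kp, kc) in enumerate(keys) if kp == p and kc != c]
        if pos:  # sample's identity seen under another camera
            losses.append(-np.mean([logp[i] for i in pos]))
    return float(np.mean(losses)) if losses else 0.0
```

In a training loop one would replace the fixed feature matrix with embeddings from the backbone (here, the camera-agnostic part) and backpropagate through the loss.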