Bo Sun, Yulong Zhang, Jianan Wang, Chunmao Jiang
Occlusion remains a major challenge in person re-identification, as it often leads to incomplete or misleading visual cues. To address this issue, we propose a dual-branch occlusion-aware network (DOAN), which explicitly and implicitly enhances the model’s capability to perceive and handle occlusions. The proposed DOAN framework comprises two synergistic branches. In the first branch, we introduce an Occlusion-Aware Semantic Attention (OASA) module to extract semantic part features, incorporating a parallel channel and spatial attention (PCSA) block to precisely distinguish between pedestrian body regions and occlusion noise. We also generate occlusion-aware parsing labels by combining external human parsing annotations with occluder masks, providing structural supervision to guide the model in focusing on visible regions. In the second branch, we develop an occlusion-aware recovery (OAR) module that reconstructs occluded pedestrians to their original, unoccluded form, enabling the model to recover missing semantic information and enhance occlusion robustness. Extensive experiments on occluded, partial, and holistic benchmark datasets demonstrate that DOAN consistently outperforms existing state-of-the-art methods.
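The parallel channel and spatial attention (PCSA) idea mentioned in the abstract can be illustrated with a minimal sketch. This is a simplified, non-learned version written for clarity: the pooling choices and the element-wise averaging used to fuse the two attention paths are assumptions, not the paper's exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pcsa_block(feat):
    """Hypothetical sketch of a parallel channel and spatial attention block.

    feat: feature map of shape (C, H, W).
    - Channel path: global average pooling over spatial dims, then a
      sigmoid gate per channel.
    - Spatial path: mean over channels, then a sigmoid gate per location.
    The two gated maps are fused by element-wise averaging (an assumed
    fusion rule; learned projections are omitted for brevity).
    """
    # Channel attention weights, shape (C, 1, 1)
    ch_w = sigmoid(feat.mean(axis=(1, 2), keepdims=True))
    # Spatial attention weights, shape (1, H, W)
    sp_w = sigmoid(feat.mean(axis=0, keepdims=True))
    # Apply both gates in parallel, then average the results
    return 0.5 * (feat * ch_w + feat * sp_w)
```

In the full model, such a block would sit inside the OASA module so that channel gating suppresses occlusion-dominated channels while spatial gating down-weights occluded locations.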