Xueliang Wang, Wenqi Huang, Wenming Yang, Qingmin Liao
Few-shot semantic segmentation aims to rapidly learn new knowledge from very few annotated samples in order to segment novel classes. Recent methods follow a metric-learning framework that represents the foreground with prototypes [1]. However, representing support images by one or a few prototypes can suffer from inadequate representation for segmentation, noise in complex scenes, and close semantic relations between foreground and background features. We propose a Spatial Correlation Fusion Network (SCFNet) for few-shot segmentation to address these issues. First, to better capture fine-grained features, we design a Spatial Correlation Fusion module that counteracts the loss of spatial information in support images, thereby improving few-shot segmentation performance. Second, we propose a Prototype Contrastive Transformation (PCT) module that learns a transformation matrix for the prototype, alleviating close semantic information and noise through a transformation loss. Experiments on PASCAL-5^i [2] and COCO-20^i [3] validate the effectiveness of our network for few-shot semantic segmentation and show that our approach achieves state-of-the-art results.
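The prototype-based metric-learning framework the abstract builds on is commonly implemented with masked average pooling over support features followed by a similarity match against query features. The sketch below illustrates that baseline step only; it is not the authors' SCFNet code, and the function names and shapes are illustrative assumptions.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Foreground prototype: average support features under the binary
    foreground mask. features: (C, H, W), mask: (H, W). Returns (C,)."""
    weighted = features * mask[None, :, :]      # zero out background locations
    denom = mask.sum() + 1e-8                   # guard against an empty mask
    return weighted.sum(axis=(1, 2)) / denom

def cosine_similarity_map(features, prototype):
    """Score each query location against the prototype via cosine
    similarity. features: (C, H, W) -> similarity map (H, W) in [-1, 1]."""
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return np.einsum('chw,c->hw', f, p)
```

Collapsing the masked support features to a single vector is exactly what discards the spatial layout of the support object; the Spatial Correlation Fusion module described above is motivated by that loss.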