Weakly supervised semantic segmentation (WSSS) aims to reduce the annotation cost of training semantic segmentation networks by leveraging weak supervision. Although class labels are a widely used form of weak supervision, annotating all object categories within scene-level images remains both labor-intensive and error-prone, since annotators must exhaustively verify the presence of multiple classes. To overcome this, we propose a novel WSSS framework that relies on only a single positive class label per image as supervision, substantially simplifying the annotation process and enabling more scalable dataset construction. However, using single positive labels introduces challenges: they can lead to degraded pseudo-semantic masks and hinder network training. To address this, we propose Prediction Score Thresholding (PST) and Permanently-corrected Label Transfer (PLT) to alleviate the degradation problem. Experimental results on PASCAL VOC 2012 and Microsoft COCO 2014 demonstrate that our proposed methods significantly enhance both the quality of the pseudo-semantic masks and the overall WSSS performance, even with extremely sparse supervision. Source code is available at https://github.com/youngwk/SPCL-WSSS.
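The abstract names Prediction Score Thresholding (PST) without detailing it; the general idea of score-based filtering of pseudo-labels can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the threshold value, and the use of an ignore index are all assumptions for the sketch.

```python
import numpy as np

def prediction_score_thresholding(score_map, threshold=0.7, ignore_index=255):
    """Illustrative sketch of score-based pseudo-label filtering.

    score_map: (C, H, W) per-class prediction scores in [0, 1].
    Pixels whose top class score falls below `threshold` are marked
    with `ignore_index` so they are excluded from the training loss.
    """
    top_score = score_map.max(axis=0)       # (H, W) highest class score per pixel
    pseudo_mask = score_map.argmax(axis=0)  # (H, W) most likely class per pixel
    # Low-confidence pixels are ignored rather than trained on,
    # limiting the damage from degraded pseudo-semantic masks.
    pseudo_mask[top_score < threshold] = ignore_index
    return pseudo_mask
```

Under this sketch, only confidently predicted pixels contribute pseudo-supervision, which is one plausible way thresholding could mitigate the mask degradation caused by single-positive labels.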
Dongjun Hwang, Hyoseo Kim, Doyeol Baek, Hyunbin Kim, Inhye Kye, Junsuk Choe