Nan Zhang, Fan Xiao, Junlin Hou, Rui-Wei Zhao, Xiaobo Zhang, Rui Feng
Semi-supervised semantic segmentation, which aims to exploit a large amount of unlabeled data together with a small amount of labeled data, has drawn increasing attention in recent years. However, existing models usually regard segmentation as pixel-wise classification, neglecting global semantic relations among pixels across different images. Moreover, the scarce annotated data usually exhibit a distribution biased away from the desired one, hindering performance improvement. To address these problems, we propose a novel cross-image distillation framework for semi-supervised semantic segmentation. Specifically, we introduce a relation distillation module that models inter-channel correlations between the features of labeled and unlabeled samples. In addition, we propose a style distillation strategy that explicitly calibrates the learned feature distributions of labeled and unlabeled data so that they are aligned. Experimental results on two popular benchmarks demonstrate that our approach outperforms other state-of-the-art methods. We will release the code soon.
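The abstract does not give the exact form of the two losses, so the following is only a plausible sketch of what "inter-channel correlation" and "style" alignment could look like in practice: the relation term compares channel-wise Gram (correlation) matrices of a labeled and an unlabeled feature map, and the style term matches per-channel mean/std statistics, a common style proxy. All function names and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def channel_correlation(feat):
    """Inter-channel correlation (Gram) matrix of a (C, H, W) feature map."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    f = f - f.mean(axis=1, keepdims=True)               # center each channel
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)  # unit-normalize
    return f @ f.T                                      # (C, C) correlations


def relation_distillation_loss(feat_labeled, feat_unlabeled):
    """Hypothetical cross-image relation term: align correlation structure."""
    g_l = channel_correlation(feat_labeled)
    g_u = channel_correlation(feat_unlabeled)
    return float(np.mean((g_l - g_u) ** 2))


def style_distillation_loss(feat_labeled, feat_unlabeled):
    """Hypothetical style term: match per-channel mean/std statistics."""
    mu_l, mu_u = feat_labeled.mean(axis=(1, 2)), feat_unlabeled.mean(axis=(1, 2))
    sd_l, sd_u = feat_labeled.std(axis=(1, 2)), feat_unlabeled.std(axis=(1, 2))
    return float(np.mean((mu_l - mu_u) ** 2) + np.mean((sd_l - sd_u) ** 2))
```

Both losses vanish when the two feature maps share the same channel statistics and correlation structure, and grow as labeled and unlabeled features drift apart, which is the calibration behavior the abstract describes.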