Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and promising performance has been achieved. In this work, we propose a new consistency regularization framework, termed mutual knowledge distillation (MKD), combined with data and feature augmentation. We introduce two auxiliary mean-teacher models based on consistency regularization. More specifically, the pseudo-labels generated by the mean teacher of one branch supervise the student network of the other branch, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also employ feature augmentation, which draws on multiple sources of knowledge for distilling the student network and thus significantly increases the diversity of the training samples. Experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at https://github.com/jianlong-yuan/semi-mmseg.
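The two core mechanisms named above (mean-teacher updates and cross-branch pseudo-label supervision) can be illustrated with a minimal, framework-free sketch. This is an assumption-laden toy in NumPy, not the paper's implementation: the helper names (`ema_update`, `pseudo_labels`, `cross_entropy`), the momentum value, and the tensor shapes are all illustrative.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    # Mean-teacher rule: teacher weights are an exponential moving
    # average of the student weights (momentum value is illustrative).
    return momentum * teacher_w + (1.0 - momentum) * student_w

def pseudo_labels(teacher_logits):
    # Hard per-pixel pseudo-labels from a teacher's class logits (H, W, C).
    return np.argmax(teacher_logits, axis=-1)

def cross_entropy(student_logits, labels):
    # Per-pixel cross-entropy of student predictions against pseudo-labels.
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w, _ = student_logits.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.log(picked + 1e-8).mean()

# Mutual distillation between two branches: the mean teacher of branch A
# generates pseudo-labels that supervise the student of branch B
# (and symmetrically, B's teacher would supervise A's student).
rng = np.random.default_rng(0)
logits_teacher_a = rng.normal(size=(4, 4, 3))   # toy 4x4 image, 3 classes
logits_student_b = rng.normal(size=(4, 4, 3))
loss_b = cross_entropy(logits_student_b, pseudo_labels(logits_teacher_a))
```

In practice the EMA update would run over all network parameters after each optimizer step, and the cross-branch loss would be combined with the supervised loss on labeled images; strong/weak and feature augmentation would be applied to the student and teacher inputs respectively.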