JOURNAL ARTICLE

Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation

Abstract

Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and promising performance has been achieved. In this work, we propose a new consistency regularization framework, termed mutual knowledge distillation (MKD), combined with data and feature augmentation. We introduce two auxiliary mean-teacher models based on consistency regularization. More specifically, the pseudo-labels generated by one mean teacher are used to supervise the student network, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also explore feature augmentation, which provides diverse sources of knowledge for distilling the student network and thus significantly increases the diversity of the training samples. Experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at https://github.com/jianlong-yuan/semi-mmseg.
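The core mechanism the abstract describes can be sketched in a few lines: each branch keeps a teacher that is an exponential moving average (EMA) of its student, and the pseudo-labels from one branch's teacher supervise the other branch's student. The sketch below is a minimal, hypothetical illustration in numpy (flat parameter lists, per-pixel hard pseudo-labels, cross-entropy loss); it is not the authors' implementation, and the function names and the momentum value are assumptions for exposition.

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.99):
    """Mean-teacher step: the teacher tracks an exponential moving
    average of the student's weights (flat-list sketch, momentum assumed)."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

def pseudo_labels(teacher_logits):
    """Hard pseudo-labels: per-pixel argmax over the class axis
    of the teacher's predictions."""
    return np.argmax(teacher_logits, axis=-1)

def cross_entropy(student_logits, labels):
    """Per-pixel cross-entropy of the student's predictions against
    pseudo-labels (here, those produced by the other branch's teacher)."""
    # numerically stable softmax over the last (class) axis
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    flat_p = p.reshape(-1, p.shape[-1])
    idx = labels.ravel()
    return -np.log(flat_p[np.arange(idx.size), idx] + 1e-12).mean()

# Mutual distillation between two branches (toy 2x2 image, 3 classes):
# teacher A's pseudo-labels supervise student B, and vice versa.
logits_teacher_a = np.random.randn(2, 2, 3)
logits_teacher_b = np.random.randn(2, 2, 3)
logits_student_a = np.random.randn(2, 2, 3)
logits_student_b = np.random.randn(2, 2, 3)

loss_b = cross_entropy(logits_student_b, pseudo_labels(logits_teacher_a))
loss_a = cross_entropy(logits_student_a, pseudo_labels(logits_teacher_b))
```

After each optimization step on the students, `ema_update` would refresh both teachers, so each teacher is a temporally smoothed copy of its own student while supervision flows across branches.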

Keywords:
Semi-supervised learning, Semantic segmentation, Consistency regularization, Knowledge distillation, Data augmentation, Feature augmentation

Metrics

Cited By: 17
FWCI (Field Weighted Citation Impact): 3.09
Refs: 31
Citation Normalized Percentile: 0.90 (top 10%)

Topics

Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
