JOURNAL ARTICLE

Channel Affinity Knowledge Distillation for Semantic Segmentation

Abstract

In recent years, convolutional neural networks have achieved significant success in computer vision tasks, but deploying these models on resource-constrained devices remains challenging. Knowledge distillation (KD) is an important technique that enables a compact student model to extract useful information from a large teacher model. Most existing KD methods for semantic segmentation aim to align predicted maps in the spatial domain, yet channel-wise distillation can also improve segmentation performance. Additionally, pairwise pixel affinity provides efficient structured reasoning for semantic segmentation. Motivated by these considerations, we propose a novel Channel Affinity KD (CAKD) framework for semantic segmentation that distills channel and cross-channel affinity relationships to better align the distributions of the student and teacher models. Extensive experiments demonstrate that our approach outperforms state-of-the-art KD methods on the Cityscapes, Pascal VOC, and ADE20K datasets.
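The paper itself does not publish code in this abstract, but the idea of aligning channel distributions and cross-channel affinities between teacher and student can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the function name channel_affinity_kd_loss, the temperature parameter, and the specific choice of a KL term for per-channel distributions plus an L2 term for C x C affinity matrices are not the authors' published formulation.

import torch
import torch.nn.functional as F


def channel_affinity_kd_loss(f_student: torch.Tensor,
                             f_teacher: torch.Tensor,
                             temperature: float = 1.0) -> torch.Tensor:
    """Hypothetical channel-affinity distillation loss for (B, C, H, W) features."""
    b, c, h, w = f_student.shape

    # Flatten spatial dimensions: each channel becomes an (H*W)-dimensional vector.
    s = f_student.reshape(b, c, h * w)
    t = f_teacher.reshape(b, c, h * w)

    # (i) Channel distribution alignment: softmax over spatial positions per channel,
    #     then KL divergence between the student and teacher distributions.
    s_log_dist = F.log_softmax(s / temperature, dim=-1)
    t_dist = F.softmax(t / temperature, dim=-1)
    loss_channel = F.kl_div(s_log_dist, t_dist, reduction="batchmean") * (temperature ** 2)

    # (ii) Cross-channel affinity alignment: C x C similarity (Gram-style) matrices
    #      built from L2-normalized channel vectors, matched with an L2 loss.
    s_norm = F.normalize(s, dim=-1)
    t_norm = F.normalize(t, dim=-1)
    affinity_s = torch.bmm(s_norm, s_norm.transpose(1, 2))  # (B, C, C)
    affinity_t = torch.bmm(t_norm, t_norm.transpose(1, 2))  # (B, C, C)
    loss_affinity = F.mse_loss(affinity_s, affinity_t)

    return loss_channel + loss_affinity

In practice such a term would be added to the usual segmentation objective, e.g. total_loss = ce_loss + lambda_kd * channel_affinity_kd_loss(student_feat, teacher_feat), where the weight lambda_kd is a tunable hyperparameter; the paper's actual loss composition may differ.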

Keywords:
Knowledge distillation; Semantic segmentation; Convolutional neural networks; Channel affinity; Pixel affinity; Computer vision; Machine learning; Pattern recognition

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.18
References: 26
Citation Normalized Percentile: 0.46

Topics

Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Multimodal Machine Learning Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence