In recent years, convolutional neural networks have achieved significant success in computer vision tasks. However, deploying these models remains challenging. Knowledge distillation (KD) is an important technique that enables a compact student model to extract helpful information from a large teacher model. Most existing KD methods for semantic segmentation aim to align predicted maps in the spatial domain, but channel-wise distillation can also improve segmentation performance. Additionally, pairwise pixel affinity provides efficient structured reasoning for semantic segmentation. Motivated by these considerations, we propose a novel Channel Affinity KD (CAKD) framework for semantic segmentation that distills channel and cross-channel affinity relationships to better align the distributions of the student and teacher models. Extensive experiments demonstrate that our approach outperforms state-of-the-art KD methods on the Cityscapes, Pascal VOC, and ADE20K datasets.
Ayoub Karine, Thibault Napoléon, Maher Jridi
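To make the cross-channel affinity idea concrete, the sketch below computes a C x C channel-affinity matrix from a feature map and penalizes the discrepancy between the student's and teacher's affinities with a mean-squared error. This is a minimal illustration, not the paper's exact formulation: the function names (channel_affinity, affinity_distillation_loss), the cosine-similarity affinity, and the MSE objective are assumptions for demonstration, and CAKD's actual losses and normalization may differ.

```python
import torch
import torch.nn.functional as F


def channel_affinity(feat: torch.Tensor) -> torch.Tensor:
    """Compute a normalized C x C cross-channel affinity matrix.

    feat: feature map of shape (B, C, H, W). Each channel is flattened to a
    vector of length H*W and L2-normalized, so the affinity entry (i, j) is
    the cosine similarity between channels i and j.
    """
    b, c, h, w = feat.shape
    flat = feat.view(b, c, h * w)
    flat = F.normalize(flat, dim=2)                # unit-norm channel descriptors
    return torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C) affinity


def affinity_distillation_loss(student_feat: torch.Tensor,
                               teacher_feat: torch.Tensor) -> torch.Tensor:
    """MSE between student and teacher channel-affinity matrices.

    Assumes both feature maps share the same channel dimension; in practice a
    1x1 projection may be needed to match the student to the teacher.
    """
    a_s = channel_affinity(student_feat)
    a_t = channel_affinity(teacher_feat.detach())  # no gradients flow to the teacher
    return F.mse_loss(a_s, a_t)


# Example usage with random tensors standing in for backbone feature maps.
if __name__ == "__main__":
    student = torch.randn(2, 256, 64, 128, requires_grad=True)
    teacher = torch.randn(2, 256, 64, 128)
    loss = affinity_distillation_loss(student, teacher)
    loss.backward()
    print(loss.item())
```

In a full training loop, a loss of this kind would typically be added to the standard cross-entropy segmentation loss with a weighting coefficient chosen on a validation set.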