Helong Zhou, Yie‐Tarng Chen, Jie Zhang, Wen‐Hsien Fang
To reduce the redundancy of convolutional kernels, this paper proposes a new convolutional structure, weighted kernel sharing convolution (WKSC), which groups input channels so that the channels in each group share the same convolutional kernel. In addition, a weight is applied to each input channel before the sharing step to preserve channel diversity. As a consequence, the number of kernels is greatly reduced, lowering the model's parameter count and speeding up inference. Moreover, WKSC can be combined with existing compression techniques such as depthwise separable convolutions to yield an even more compact architecture. Extensive experiments on CIFAR-100 and ImageNet classification demonstrate the effectiveness of the new approach in both computational cost and required parameters compared with state-of-the-art works.
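To illustrate the parameter savings the abstract claims, the sketch below counts parameters for a standard convolution versus a WKSC-style layer under a simple reading of the description: input channels are split into groups, each group shares one kernel per output channel, and one scalar weight is kept per input channel. The function names, the group count, and the exact weighting scheme are assumptions for illustration, not the authors' implementation.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_out * c_in * k * k

def wksc_params(c_in, c_out, k, groups):
    # WKSC-style sketch (assumed): channels in each of `groups` groups share
    # one k x k kernel per output channel, plus one scalar weight per input
    # channel applied before the shared convolution.
    assert c_in % groups == 0, "group size must divide the input channels"
    shared_kernels = c_out * groups * k * k
    channel_weights = c_in
    return shared_kernels + channel_weights

# Hypothetical layer: 256 -> 256 channels, 3x3 kernels, 8 sharing groups.
standard = conv_params(256, 256, 3)   # 589824 parameters
wksc = wksc_params(256, 256, 3, 8)    # 18688 parameters
print(standard, wksc)
```

Under these assumptions the shared-kernel layer needs roughly 1/32 of the standard layer's parameters, which is the kind of reduction the kernel-sharing idea is aiming at; the paper's actual figures depend on its specific grouping and weighting design.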