Multi-kernel k-means clustering (MKC) aims to learn a composite kernel from multiple precomputed basic kernels so as to better reflect the data distribution. In existing MKC models, the optimal composite kernel is a linear combination of the basic kernels with varying weights, subject to different constraints. While some state-of-the-art models achieve satisfactory clustering performance, they often do so at the expense of model interpretability. To address this issue, this paper proposes a new Multi-Kernel K-means Clustering model with maximized Entropy regularization (MKKC-E). In the new model, a convex combination of the basic kernels is used to learn the optimal composite kernel, which enhances interpretability. Meanwhile, an entropy regularization term is introduced to prevent the kernel weights from becoming overly sparse, thereby improving the model's robustness. Experiments on synthetic datasets validate the proposed model's interpretability and robustness, while results on benchmark datasets confirm its superior clustering performance. In conclusion, the MKKC-E model not only achieves excellent clustering performance but also offers significant interpretability.
Jiaji Qiu, Huiying Xu, Xinzhong Zhu, Michael Adjeisah
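To make the two ingredients of the abstract concrete, the following is a minimal sketch of a convex kernel combination with entropy-regularized weights. It uses the standard closed-form result that minimizing a weighted sum of per-kernel costs plus an entropy term over the probability simplex yields a softmax of the negated costs; the function names, the cost inputs, and the regularization parameter `lam` are illustrative assumptions, not the exact MKKC-E update.

```python
import numpy as np

def entropy_regularized_weights(costs, lam):
    """Closed-form minimizer of sum_p w_p * costs[p] + lam * sum_p w_p * log(w_p)
    over the simplex {w >= 0, sum(w) = 1}: a softmax of -costs/lam.
    (Standard result for entropy regularization; an illustrative stand-in
    for the MKKC-E weight update, not taken from the paper.)"""
    logits = -np.asarray(costs, dtype=float) / lam
    logits -= logits.max()            # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def composite_kernel(kernels, weights):
    """Convex combination of precomputed basic kernel matrices."""
    return sum(w * K for w, K in zip(weights, kernels))

# Illustration: larger lam pushes the weights toward uniform,
# i.e. away from a sparse selection of a single kernel.
costs = [1.0, 2.0, 3.0]               # hypothetical per-kernel distortions
w_small = entropy_regularized_weights(costs, lam=0.5)
w_large = entropy_regularized_weights(costs, lam=100.0)
```

Increasing `lam` strengthens the entropy term, spreading weight across all basic kernels instead of concentrating it on the single cheapest one; this is the mechanism behind the abstract's claim that entropy regularization prevents overly sparse weights and improves robustness.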