Peipei Yuan, Xinge You, Hong Chen, Yingjie Wang, Qinmu Peng, Bin Zou
Sparse additive machines (SAMs) have shown competitive performance in variable selection and classification for high-dimensional data, owing to their representational flexibility and interpretability. However, existing methods often employ unbounded or nonsmooth surrogates of the 0-1 classification loss, which can degrade performance on data with outliers. To alleviate this problem, we propose a robust classification method, the SAM with the correntropy-induced loss (CSAM), which integrates the correntropy-induced loss (C-loss), a data-dependent hypothesis space, and a weighted norm regularizer into additive machines. In theory, the generalization error bound is estimated via a novel error decomposition and concentration-estimation techniques, which shows that a satisfactory convergence rate can be achieved under proper parameter conditions. In addition, a theoretical guarantee of variable-selection consistency is established. Experimental evaluations on both synthetic and real-world datasets consistently validate the effectiveness and robustness of the proposed approach.
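The robustness argument in the abstract rests on the boundedness of the C-loss compared with unbounded surrogates such as the hinge loss. The sketch below uses one common formulation of the C-loss on the classification margin, $\ell_C(m) = \beta\,[1 - \exp(-(1-m)^2/\sigma^2)]$ with normalizer $\beta = [1 - \exp(-1/\sigma^2)]^{-1}$; the exact form and parameterization in the paper may differ, so treat this as an illustrative assumption rather than the authors' definition.

```python
import math

def hinge_loss(margin):
    # Standard hinge surrogate: grows without bound as the margin decreases,
    # so a single outlier with a large negative margin can dominate training.
    return max(0.0, 1.0 - margin)

def c_loss(margin, sigma=1.0):
    # Correntropy-induced loss (assumed formulation):
    #   beta * (1 - exp(-(1 - margin)^2 / sigma^2)),
    # normalized by beta so that c_loss(0) = 1. It is smooth, nonconvex,
    # and bounded above by beta, which limits the influence of outliers.
    beta = 1.0 / (1.0 - math.exp(-1.0 / sigma ** 2))
    return beta * (1.0 - math.exp(-(1.0 - margin) ** 2 / sigma ** 2))

# A correctly classified point with margin 1 incurs zero loss under both;
# an outlier with margin -10 blows up the hinge loss but saturates the C-loss.
print(hinge_loss(-10.0))        # 11.0
print(round(c_loss(-10.0), 4))  # ~ beta, i.e. 1.582 for sigma = 1
```

Shrinking `sigma` makes the loss saturate faster (more aggressive outlier suppression) at the cost of a harder nonconvex optimization landscape, which is the usual trade-off for correntropy-based objectives.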