Guangji Bai, Johnny Torres, Junxiang Wang, Liang Zhao, Cristina L. Abad, Carmen Vaca
Multi-task learning is a framework that encourages different tasks to share knowledge in order to improve generalization performance. It is a long-standing, active domain that grapples with several core issues, including which tasks are correlated and how knowledge should be shared among correlated tasks. Existing works usually do not distinguish between the polarity and the magnitude of feature weights and commonly rely on linear correlation, owing to three major technical challenges: 1) optimizing models that regularize feature weight polarity; 2) deciding whether to regularize sign or magnitude; and 3) identifying which tasks should share their sign and/or magnitude patterns. To address these challenges, this paper proposes a new multi-task learning framework that can regularize feature weight signs across tasks, going beyond the conventional framework of feature weight magnitude regularization. We formulate this sign-regularization problem as a biconvex, inequality-constrained optimization over products of feature weights with slack variables. We then propose a new, efficient algorithm for this optimization with theoretical guarantees on generalization performance and convergence. Extensive experiments on multiple datasets demonstrate the proposed method's effectiveness and efficiency, as well as the reasonableness of the regularized feature weight patterns.
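To make the idea of sign regularization concrete, the following is a minimal, illustrative sketch (not the paper's exact formulation): a hinge-style penalty on the pairwise products of feature weights across tasks, so that a pair of tasks is penalized whenever a shared feature's weights disagree in sign (or agree too weakly, as controlled by a hypothetical slack margin). The function name `sign_penalty` and the slack parameter are assumptions for illustration.

```python
import numpy as np

def sign_penalty(W, slack=0.1):
    """Illustrative sign-consistency penalty across tasks.

    W: array of shape (num_tasks, num_features), one weight vector per task.
    For every pair of tasks (t, s) and every feature i, the term
    max(0, slack - W[t, i] * W[s, i]) is zero when the two weights agree
    in sign with product at least `slack`, and grows as they disagree.
    """
    num_tasks = W.shape[0]
    total = 0.0
    for t in range(num_tasks):
        for s in range(t + 1, num_tasks):
            total += np.maximum(0.0, slack - W[t] * W[s]).sum()
    return total

# Two tasks, two features: feature 0 agrees in sign, feature 1 disagrees.
W = np.array([[1.0, -2.0],
              [0.5,  3.0]])
print(sign_penalty(W))  # only the sign-conflicting feature contributes
```

In a full model, such a penalty would be added to the per-task losses, with the slack variables themselves optimized subject to inequality constraints, which is where the biconvex structure described in the abstract arises.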