JOURNAL ARTICLE

Task Variance Regularized Multi-Task Learning

Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Wenbin Hu

Year: 2022 · Journal: IEEE Transactions on Knowledge and Data Engineering · Pages: 1-14 · Publisher: IEEE Computer Society

Abstract

Multi-task Learning (MTL), which involves the simultaneous learning of multiple tasks, can achieve better performance than learning each task independently. It has achieved great success in various applications, ranging from Computer Vision (CV) to Natural Language Processing (NLP). In MTL, the losses of the constituent tasks are jointly optimized. However, it is common for these tasks to compete: minimizing the losses of some tasks increases the losses of others, which in turn increases the task variance (the variance between the task-specific losses). This induces under-fitting in some tasks and over-fitting in others, which degrades the generalization performance of an MTL model. To address this issue, it is necessary to control the task variance; thus, task variance regularization is a natural choice. While intuitive, task variance regularization remains unexplored in MTL. Accordingly, to fill this gap, we study the generalization error bound of MTL through the lens of task variance and show that task variance matters for the generalization performance of MTL. Furthermore, this paper investigates how the task variance can be effectively regularized, and consequently proposes a multi-task learning method based on the adversarial multi-armed bandit. The proposed method, dubbed BanditMTL, regularizes the task variance by means of a mirror gradient ascent-descent algorithm. Adopting BanditMTL in both CV and NLP applications is found to achieve state-of-the-art performance. The results of extensive experiments back up our theoretical analysis and validate the superiority of our proposals.
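The abstract's core idea, penalizing the variance between task-specific losses on top of the usual joint objective, can be illustrated with a minimal sketch. Note that this is only a simplified variance-penalty illustration of the concept; it is not the paper's BanditMTL mirror gradient ascent-descent algorithm, and the function name and `lam` weight are hypothetical.

```python
import numpy as np

def variance_regularized_loss(task_losses, lam=0.1):
    """Combine per-task losses into one objective that penalizes
    task variance (the variance between task-specific losses).

    Illustrative sketch only -- not the paper's BanditMTL algorithm.
    """
    losses = np.asarray(task_losses, dtype=float)
    mean_loss = losses.mean()          # standard joint MTL objective
    task_var = losses.var()            # spread across tasks
    return mean_loss + lam * task_var  # variance-regularized objective

# Two scenarios with the same mean loss: balanced vs. competing tasks.
balanced = variance_regularized_loss([0.5, 0.5, 0.5], lam=1.0)   # var = 0
competing = variance_regularized_loss([0.1, 0.5, 0.9], lam=1.0)  # var > 0
```

With equal mean loss, the competing (high-variance) scenario is penalized more, so an optimizer minimizing this objective is pushed toward balanced progress across tasks.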

Keywords:
Computer science, Regularization, Variance, Artificial intelligence, Generalization, Machine learning, Multi-task learning, Generalization error, Task analysis, Artificial neural network, Mathematics

Metrics

Cited by: 5 · FWCI (Field-Weighted Citation Impact): 0.98 · References: 82 · Citation Normalized Percentile: 0.74

Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence
Machine Learning and ELM
Physical Sciences → Computer Science → Artificial Intelligence
Multimodal Machine Learning Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition