JOURNAL ARTICLE

Robust Lifelong Multi-task Multi-view Representation Learning

Abstract

State-of-the-art multi-task multi-view (MTMV) learning tackles scenarios in which multiple tasks are related to one another through multiple shared feature views. However, in practical online settings, where learning tasks arrive with heterogeneous features collected from multiple views (e.g., multiple sources), existing single-view methods do not work well. To address this issue, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model that accumulates knowledge from online multi-view tasks. More specifically, we first design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonality-promoting term to enforce that the libraries are as independent as possible. When a new multi-view task arrives online, the rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of the corresponding libraries. In this view-invariant space, capturing the underlying inter-view correlations and identifying task-specific views for the new task are carried out jointly via a robust multi-task learning formulation. The view-specific libraries are then refined over time so that performance keeps improving across all tasks. For model optimization, the proximal alternating linearized minimization (PALM) algorithm is adopted to optimize our nonconvex model alternately and achieve lifelong learning. Finally, extensive experiments on benchmark datasets show that the proposed rLM2L model outperforms existing lifelong learning models, while discovering task-specific views from sequential multi-view tasks with less computational burden.
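The orthogonality-promoting term mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the penalty below, the sum of squared Frobenius norms of cross-library Gram matrices, is an assumed, commonly used choice for pushing a set of library matrices toward mutual independence; the function name and shapes are hypothetical.

```python
import numpy as np

def orthogonality_penalty(libraries):
    """Sum of ||L_i^T L_j||_F^2 over all pairs i < j of view-specific libraries.

    The penalty is zero exactly when the column spaces of the libraries are
    mutually orthogonal, so minimizing it encourages the libraries to encode
    independent (view-specific) information.
    """
    total = 0.0
    for i in range(len(libraries)):
        for j in range(i + 1, len(libraries)):
            total += np.linalg.norm(libraries[i].T @ libraries[j], "fro") ** 2
    return total

# Two libraries whose columns come from a single orthonormal basis incur
# (numerically) zero penalty; random libraries incur a positive one.
rng = np.random.default_rng(0)
d, k = 8, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, 2 * k)))
L1, L2 = Q[:, :k], Q[:, k:]
print(orthogonality_penalty([L1, L2]))  # ~0 up to floating-point error
```

In a full lifelong-learning loop, this term would be added to the reconstruction loss for each view and minimized alternately (e.g., by PALM) together with the task-specific factors.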

Keywords:
Multi-task learning · Multi-view learning · Lifelong learning · Representation learning · Feature learning · Machine learning · Artificial intelligence

Metrics

Cited by: 16
FWCI (Field-Weighted Citation Impact): 2.98
References: 20
Citation Normalized Percentile: 0.92 (in top 10%)

Topics

Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Sparse and Compressive Sensing Techniques (Physical Sciences → Engineering → Computational Mechanics)
