JOURNAL ARTICLE

Nonparametric Clustering-Guided Cross-View Contrastive Learning for Partially View-Aligned Representation Learning

Shengsheng Qian, Dizhan Xue, Jun Hu, Huaiwen Zhang, Changsheng Xu

Journal: IEEE Transactions on Image Processing | Year: 2024 | Vol: 33 | Pages: 6158-6172 | Publisher: Institute of Electrical and Electronics Engineers

Abstract

With the increasing availability of multi-view data, multi-view representation learning has emerged as a prominent research area. However, collecting strictly view-aligned data is usually expensive, and learning from a mixture of aligned and unaligned data is more practical. Therefore, Partially View-aligned Representation Learning (PVRL) has recently attracted increasing attention. After multi-view representations are aligned based on their semantic similarity, they can be utilized to facilitate downstream tasks such as clustering. However, existing methods may be constrained by the following limitations: 1) they learn semantic relations across views using only the known correspondences, which are incomplete, and the resulting false negative pairs (FNP) can significantly impair learning effectiveness; 2) existing strategies for alleviating the impact of FNP are largely heuristic and lack a theoretical account of the conditions under which they apply; 3) they attempt to identify FNP based on distance in the common space and fail to explore semantic relations between multi-view data. In this paper, we propose Nonparametric Clustering-guided Cross-view Contrastive Learning (NC3L) for PVRL to address the above issues. First, we estimate the similarity matrix between multi-view data in a marginal cross-view contrastive loss so as to approximate the similarity matrix of supervised contrastive learning (CL). Second, we establish the theoretical foundation of the proposed method by analyzing the error bounds, between our method and supervised CL, of the loss function and its derivatives. Third, we propose Deep Variational Nonparametric Clustering (DeepVNC), a deep reparameterized variational inference scheme for Dirichlet process Gaussian mixture models, to construct cluster-level similarity between multi-view data and discover FNP. Additionally, we propose a reparameterization trick to improve the robustness and performance of our CL method. Extensive experiments on four widely used benchmark datasets show the superiority of the proposed method over state-of-the-art methods.
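The key mechanism the abstract describes — replacing the binary correspondence targets of standard cross-view contrastive learning with an estimated cluster-level similarity matrix, so that suspected false negative pairs are treated as partial positives — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name is invented, and hard cluster labels are used here as a stand-in for DeepVNC's variational Dirichlet-process assignments.

```python
import numpy as np

def soft_cross_view_contrastive_loss(z1, z2, cluster_labels, tau=0.5):
    """Cross-view InfoNCE with cluster-softened targets.

    z1, z2         : (n, d) L2-normalized embeddings of the two views
    cluster_labels : (n,) cluster assignments, a proxy for nonparametric
                     clustering posteriors; same-cluster pairs are no
                     longer pushed apart as negatives
    """
    n = z1.shape[0]
    logits = z1 @ z2.T / tau  # (n, n) cross-view similarity scores
    # Estimated similarity matrix S: 1 for same-cluster pairs, else 0,
    # row-normalized so each row is a valid target distribution.
    S = (cluster_labels[:, None] == cluster_labels[None, :]).astype(float)
    S /= S.sum(axis=1, keepdims=True)
    # Cross-entropy between the softened targets S and the row-wise
    # softmax over logits.
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(S * log_p).sum() / n)
```

When every sample sits in its own cluster, S reduces to the identity matrix and the loss coincides with standard cross-view InfoNCE; coarser clusters remove suspected false negatives from the negative set by redistributing target mass onto same-cluster pairs.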

Keywords:
Cluster analysis, Computer science, Artificial intelligence, Representation, Nonparametric statistics, Feature learning, Pattern recognition, Machine learning, Natural language processing, Mathematics, Statistics

Metrics

Cited by: 3
FWCI (Field-Weighted Citation Impact): 1.59
References: 68
Citation Normalized Percentile: 0.76

Topics

Face and Expression Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)
Human Pose and Action Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

Related Documents

JOURNAL ARTICLE

Partially View-Aligned Representation Learning via Cross-View Graph Contrastive Network

Yiming Wang, Dongxia Chang, Zhiqiang Fu, Jie Wen, Yao Zhao

Journal: IEEE Transactions on Circuits and Systems for Video Technology | Year: 2024 | Vol: 34 (8) | Pages: 7272-7283
JOURNAL ARTICLE

SMART: Semantic Matching Contrastive Learning for Partially View-Aligned Clustering

Liang Peng, Yixuan Ye, Cheng Liu, Hangjun Che, Fei Wang, Zhiwen Yu, Si Wu, Hau-San Wong

Journal: IEEE Transactions on Circuits and Systems for Video Technology | Year: 2025 | Pages: 1-1
JOURNAL ARTICLE

A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning

Guanzhou Ke, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Yongqi Zhu, Yang Yu

Journal: IEEE Transactions on Circuits and Systems for Video Technology | Year: 2023 | Vol: 34 (4) | Pages: 2056-2069