JOURNAL ARTICLE

Structure-Aware and Consistency-Preserved Graph Contrastive Learning without Augmentation

Haoxuan Zhang, Yafang Li, Jianqiang Li

Year: 2025 · Journal: Journal of Physics: Conference Series · Vol: 3145 (1) · Pages: 012032 · Publisher: IOP Publishing

Abstract

Graph contrastive learning (GCL) has made significant progress in unsupervised graph representation learning. However, most methods rely on manually designed augmentations, which introduce high computational overhead and risk semantic inconsistency, especially when perturbations distort graph structure or corrupt key features. To overcome these issues, we propose SCOPE (Structure-aware and COnsistency-Preserved graph contrastive lEarning), an augmentation-free framework that exploits intrinsic graph information to define meaningful contrastive objectives. Specifically, we propose a structure-aware positive sampling strategy using partial absorption scores to select topologically similar nodes as positives, ensuring semantic relevance without artificial noise. Meanwhile, a feature-driven KNN graph serves as an auxiliary view, and consistency between embeddings from the original and KNN graphs is enforced via a cross-view alignment loss. This dual approach removes the need for stochastic augmentations while preserving structural and attribute semantics. We evaluate SCOPE on six benchmark datasets, consistently achieving competitive or superior results compared to other contrastive learning methods. These results highlight the effectiveness of structure-aware sampling and consistency preservation in improving the stability and efficiency of contrastive learning on graphs.
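The abstract gives no implementation details, but its two augmentation-free ingredients, a feature-driven KNN auxiliary view and a cross-view alignment objective, can be sketched roughly as follows. The function names, the cosine-similarity KNN construction, and the cosine-based alignment loss are illustrative assumptions, not taken from the paper (SCOPE's actual encoder, similarity measure, and loss may differ):

```python
import numpy as np

def knn_graph(X, k=2):
    """Build a symmetric KNN adjacency from feature similarity.

    Illustrative stand-in for the paper's feature-driven KNN auxiliary view;
    cosine similarity is an assumed choice, not confirmed by the abstract.
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-loops from the neighbor search
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]   # top-k most similar nodes per row
    rows = np.arange(X.shape[0])[:, None]
    A[rows, idx] = 1.0
    return np.maximum(A, A.T)             # symmetrize the adjacency matrix

def alignment_loss(Z1, Z2):
    """Cross-view alignment: 1 minus the mean per-node cosine similarity
    between embeddings from the original-graph and KNN-graph views."""
    Z1 = Z1 / (np.linalg.norm(Z1, axis=1, keepdims=True) + 1e-12)
    Z2 =2 is None or Z2  # no-op guard removed below; normalize Z2
    Z2 = Z2 / (np.linalg.norm(Z2, axis=1, keepdims=True) + 1e-12)
    return 1.0 - float(np.mean(np.sum(Z1 * Z2, axis=1)))
```

In a full pipeline, a shared GNN encoder would produce `Z1` from the original graph and `Z2` from `knn_graph(X, k)`, and minimizing `alignment_loss(Z1, Z2)` would enforce the cross-view consistency the abstract describes, with no stochastic augmentation involved.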

Metrics

Cited by: 0 · FWCI (Field-Weighted Citation Impact): 0.00 · References: 8
