Haoxuan Zhang, Yafang Li, Jianqiang Li
Abstract

Graph contrastive learning (GCL) has made significant progress in unsupervised graph representation learning. However, most methods rely on manually designed augmentations, which introduce high computational overhead and risk semantic inconsistency, especially when perturbations distort graph structure or corrupt key features. To overcome these issues, we propose SCOPE (Structure-aware and COnsistency-Preserved graph contrastive lEarning), an augmentation-free framework that exploits intrinsic graph information to define meaningful contrastive objectives. Specifically, we propose a structure-aware positive sampling strategy that uses partial absorption scores to select topologically similar nodes as positives, ensuring semantic relevance without artificial noise. Meanwhile, a feature-driven KNN graph serves as an auxiliary view, and consistency between embeddings from the original and KNN graphs is enforced via a cross-view alignment loss. This dual approach removes the need for stochastic augmentations while preserving structural and attribute semantics. We evaluate SCOPE on six benchmark datasets, consistently achieving competitive or superior results compared to other contrastive learning methods. These results highlight the effectiveness of structure-aware sampling and consistency preservation in improving the stability and efficiency of contrastive learning on graphs.
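The two augmentation-free ingredients named above, a feature-driven KNN graph as an auxiliary view and a cross-view alignment loss, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: it uses cosine similarity for the KNN construction and mean (1 − cosine similarity) as the alignment loss, and the function names (`knn_graph`, `alignment_loss`) are hypothetical.

```python
import numpy as np

def knn_graph(X, k=2):
    """Build a symmetric KNN adjacency from node features X (n x d).

    Assumption: cosine similarity is used to pick neighbors; the paper's
    exact construction of the feature-driven view may differ.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-loops
    idx = np.argsort(-S, axis=1)[:, :k]   # top-k most similar nodes
    A = np.zeros_like(S)
    A[np.arange(X.shape[0])[:, None], idx] = 1.0
    return np.maximum(A, A.T)             # symmetrize

def alignment_loss(Z1, Z2):
    """Cross-view consistency: mean (1 - cosine similarity) between
    corresponding node embeddings from the original-graph view (Z1)
    and the KNN-graph view (Z2). A generic choice, not necessarily
    the loss used in SCOPE."""
    Z1n = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2n = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(Z1n * Z2n, axis=1)))
```

In a full pipeline, a shared encoder (e.g., a GCN) would embed the nodes once under the original adjacency and once under `knn_graph(X)`, and `alignment_loss` would be minimized alongside the contrastive objective over the structure-aware positives.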