Multi-view network embedding aims to learn low-dimensional representation vectors for nodes while preserving multiple relationships between them, substantially reducing the time and space complexity of downstream network analysis tasks. Although previous works have achieved strong performance, they suffer from two limitations: (1) they preserve only the network structure and ignore semantic-level information; (2) they focus only on intra-view signals and ignore the powerful influence of inter-view signals. To address these limitations, a new framework, Multi-view Network Embedding with Structure and Semantic Contrastive Learning (MNE-SSCL), is proposed. It learns high-quality low-dimensional node embeddings both intra-view and inter-view, while preserving structure and semantic information simultaneously. Extensive experiments on three real-world datasets show that MNE-SSCL outperforms state-of-the-art methods.
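The abstract does not give MNE-SSCL's exact objectives, but inter-view contrastive learning of this kind is commonly instantiated as an InfoNCE loss that pulls together the two embeddings a node receives in different views and pushes apart embeddings of different nodes. The following is a minimal illustrative sketch of that generic idea, not the paper's method; the function name `info_nce_loss`, the temperature value, and the toy data are all assumptions for illustration.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views' node embeddings.

    z1, z2: (n_nodes, dim) arrays. Row i of each view is treated as a
    positive pair; all other rows act as negatives. Returns the mean
    loss over nodes. (Illustrative sketch, not MNE-SSCL's exact loss.)
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal of the similarity matrix.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
n, d = 8, 4
z_view_a = rng.normal(size=(n, d))
# A well-aligned second view (small perturbation of the first) should
# yield a lower contrastive loss than an unrelated random view.
z_view_b_aligned = z_view_a + 0.05 * rng.normal(size=(n, d))
z_view_b_random = rng.normal(size=(n, d))
loss_aligned = info_nce_loss(z_view_a, z_view_b_aligned)
loss_random = info_nce_loss(z_view_a, z_view_b_random)
print(loss_aligned, loss_random)
```

Minimizing such a loss encourages a node's representations to agree across views (the inter-view signal the abstract highlights), while the softmax over other nodes keeps distinct nodes separated.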
Mingjie Zhang, Dingwen Wang, Hongrun Wu, Yuanxiang Li, Zhenglong Xiang