Tao He, Lianli Gao, Jingkuan Song, Xin Wang, Kejie Huang, Yuan-Fang Li
Learning accurate low-dimensional embeddings for a network is a crucial task, as it facilitates many network analytics tasks. However, trained embeddings often require a significant amount of space to store, making storage and processing a challenge, especially as large-scale networks become more prevalent. In this paper, we present a novel semi-supervised network embedding and compression method, SNEQ, that is competitive with state-of-the-art embedding methods while being far more space- and time-efficient. SNEQ incorporates a novel quantisation method based on a self-attention layer that is trained in an end-to-end fashion, and that dramatically compresses the trained embeddings, thus reducing the storage footprint and accelerating retrieval. Our evaluation on four real-world networks of diverse characteristics shows that SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, node classification and node recommendation. Moreover, the quantised embeddings show a great advantage in terms of storage and retrieval time compared with both continuous embeddings and hashing methods.
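The abstract's core idea — compressing embeddings with an attention layer trained end-to-end — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual architecture: it assumes a single learned codebook, scaled dot-product attention between an embedding and the codewords, a softmax-weighted "soft" code that keeps the pipeline differentiable during training, and a hard argmax index (a few bits per node) stored at inference time.

```python
import numpy as np

def attention_quantise(embedding, codebook, temperature=1.0):
    """Soft-quantise one embedding against a codebook of K codewords.

    Hypothetical sketch: attention scores are scaled dot products,
    the soft code is the attention-weighted sum of codewords (used in
    end-to-end training), and the hard code is the argmax codeword
    index (what is actually stored, in log2(K) bits).
    """
    d = embedding.shape[-1]
    scores = codebook @ embedding / (np.sqrt(d) * temperature)  # (K,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the K codewords
    soft_code = weights @ codebook           # (d,) differentiable proxy
    hard_index = int(np.argmax(weights))     # stored at inference: 1 byte if K=256
    return soft_code, hard_index

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 128))   # K=256 codewords, dimension d=128
embedding = rng.standard_normal(128)
soft_code, hard_index = attention_quantise(embedding, codebook)
print(soft_code.shape, hard_index)
```

With K=256, each node's 128-dimensional float32 embedding (512 bytes) collapses to a single byte at inference, which is the kind of storage saving the abstract refers to; the real method's codebook structure and training losses are described in the paper itself.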