In recent years, graph contrastive learning with graph neural networks (GNNs) has achieved promising node classification accuracy by learning representations in an unsupervised manner. However, although such representations perform well on seen classes, they do not generalize to unseen novel classes for which only a few labeled samples are available. To endow graph contrastive learning with generalization capability, we propose multimodal graph meta contrastive learning (MGMC) in this paper, which integrates multimodal meta-learning into graph contrastive learning. On one hand, MGMC adapts quickly and effectively to unseen novel classes with the aid of bilevel meta-optimization, addressing few-shot problems. On the other hand, MGMC generalizes rapidly to generic datasets with multimodal distributions by introducing a FiLM-based modulation module. In addition, MGMC incorporates the latest graph contrastive learning method, which does not rely on the construction of augmentations and negative examples. To the best of our knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensive experimental results on three graph-structured datasets demonstrate the effectiveness of the proposed MGMC on few-shot node classification tasks.
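For readers unfamiliar with FiLM (feature-wise linear modulation), the core operation is a per-channel scale and shift whose parameters are produced by a conditioning input. The sketch below is illustrative only, assuming hypothetical parameter values; it is not the paper's actual modulation module.

```python
import numpy as np

def film_modulate(features, gamma, beta):
    # FiLM: scale each feature channel by gamma and shift it by beta.
    # In practice, gamma and beta would be predicted by a small network
    # conditioned on task or modality information.
    return gamma * features + beta

# Toy example: 4 nodes, each with a 3-dimensional embedding.
x = np.ones((4, 3))
gamma = np.array([2.0, 0.5, 1.0])  # hypothetical per-channel scales
beta = np.array([0.0, 1.0, -1.0])  # hypothetical per-channel shifts
out = film_modulate(x, gamma, beta)
```

Because the modulation is only an affine transform per channel, it adds very few parameters while letting the same backbone adapt its features to different modes of the data distribution.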