Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information, and have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black boxes and lack human-intelligible explanations; without such explanations, they cannot be fully trusted or deployed in certain application domains. In this work, we propose a novel approach, XGNN, to interpret GNNs at the model level. Our approach provides high-level insights and a generic understanding of how GNNs work. In particular, we propose to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. We formulate graph generation as a reinforcement learning task in which, at each step, the graph generator predicts how to add an edge to the current graph. The graph generator is trained via a policy gradient method based on information from the trained GNNs. In addition, we incorporate several graph rules to encourage the generated graphs to be valid. Experimental results on both synthetic and real-world datasets show that our proposed method helps understand and verify trained GNNs. Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve the trained GNNs.
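The reinforcement learning formulation described above can be illustrated with a toy REINFORCE loop: a softmax policy over candidate edges adds one edge per step, and the finished graph is scored by the model being explained. Everything here is a hypothetical stand-in, not the paper's implementation; in particular, `gnn_reward` (which rewards triangle-rich graphs) plays the role of the trained GNN's class score.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a trained GNN: scores a graph by counting triangles.
def gnn_reward(edges, n_nodes):
    adj = [[False] * n_nodes for _ in range(n_nodes)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    return float(sum(adj[i][j] and adj[j][k] and adj[i][k]
                     for i in range(n_nodes)
                     for j in range(i + 1, n_nodes)
                     for k in range(j + 1, n_nodes)))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

n_nodes, steps, lr = 5, 4, 0.5
candidates = [(u, v) for u in range(n_nodes) for v in range(u + 1, n_nodes)]
theta = [0.0] * len(candidates)  # one logit per candidate edge (the "generator")

for episode in range(200):
    edges, trajectory = [], []
    for _ in range(steps):  # each step, the generator picks one edge to add
        probs = softmax(theta)
        idx = random.choices(range(len(candidates)), weights=probs)[0]
        if candidates[idx] not in edges:  # duplicate picks are simply skipped
            edges.append(candidates[idx])
        trajectory.append((idx, probs))
    reward = gnn_reward(edges, n_nodes)  # score the finished graph
    # REINFORCE update: increase log-probability of chosen edges, scaled by reward.
    for idx, probs in trajectory:
        for j in range(len(theta)):
            grad = (1.0 if j == idx else 0.0) - probs[j]
            theta[j] += lr * reward * grad
```

After training, the logits in `theta` concentrate on edges that form triangles, i.e. the generated pattern exposes what the (mock) model responds to. The paper's full method additionally conditions each step on the current graph and applies validity rules; this sketch keeps only the policy-gradient skeleton.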