Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Yi Chang
Graph-structured data is ubiquitous across domains such as physics, chemistry, biology, computer vision, and social networks. Recently, graph neural networks (GNNs) have proven successful at representing graph-structured data, owing to their strong performance and generalization ability. However, explaining the predictions of GNN models is challenging because of the complex nonlinear transformations performed across iterations. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs based on the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. Through experiments on two real-world datasets, the explanations of GraphLIME are found to be of higher quality and more descriptive than those produced by existing explanation methods.
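The sketch below illustrates, under simplifying assumptions, the local HSIC-Lasso explanation idea described in the abstract: the features of the nodes in the explained node's neighborhood are scored by how strongly they (nonlinearly) depend on the model's prediction, via a non-negative Lasso over centered Gram matrices. The Gaussian-kernel width, neighborhood size, and regularization strength are illustrative choices, not the paper's exact settings, and the toy data stands in for a real GNN's neighborhood features and outputs.

```python
# Minimal sketch of a local HSIC-Lasso feature explanation (assumptions noted above).
import numpy as np
from sklearn.linear_model import Lasso


def _centered_gaussian_gram(v, gamma=1.0):
    """Centered Gaussian-kernel Gram matrix of a 1-D sample vector."""
    d = (v[:, None] - v[None, :]) ** 2
    K = np.exp(-gamma * d)
    n = len(v)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return H @ K @ H


def hsic_lasso_explain(X, y, lam=0.01):
    """Score each feature of the local neighborhood X (n_nodes x n_features)
    against the model output y (n_nodes,) with HSIC Lasso.
    Returns a non-negative importance weight per feature."""
    n, d = X.shape
    L_bar = _centered_gaussian_gram(y).ravel()
    L_bar /= np.linalg.norm(L_bar) + 1e-12
    cols = []
    for j in range(d):
        Kb = _centered_gaussian_gram(X[:, j]).ravel()
        cols.append(Kb / (np.linalg.norm(Kb) + 1e-12))  # Frobenius normalization
    K_bars = np.column_stack(cols)
    # HSIC Lasso reduces to a non-negative Lasso between the flattened Gram matrices.
    model = Lasso(alpha=lam, positive=True, fit_intercept=False)
    model.fit(K_bars, L_bar)
    return model.coef_


# Toy usage: a "node" whose sampled neighborhood has 30 nodes and 5 features;
# y stands in for the GNN's predicted probability at each neighbor.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = np.tanh(2.0 * X[:, 1])  # the prediction depends only on feature 1
print(hsic_lasso_explain(X, y))  # weight should concentrate on feature index 1
```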