DISSERTATION

Explainability in Graph Neural Networks

Li, Peibo

Year: 2022
Institution: University of New South Wales, Sydney, Australia (UNSWorks repository)
Publisher: Australian Defence Force Academy

Abstract

Deep neural networks have been predominant in AI applications over the past decade. Inspired by the success of deep learning in the image and text domains, graph neural networks (GNNs) have been extensively developed for graph data in a wide range of applications. Among the many active research topics on GNNs, this thesis focuses on two: explainability and semi-supervised learning. Semi-supervised learning is a major task for GNNs, and exploring the explainability of GNNs helps us to understand these models better, which in turn benefits GNN-based semi-supervised learning. The first problem is the explainability of GNNs. Like all other neural-network-based models, GNNs suffer from the black-box problem: people cannot understand the mechanism underlying their predictions. To address this problem, several GNN explainability methods have been proposed to explain the decisions made by GNNs. We conducted comprehensive experimental studies of the state-of-the-art GNN explainability methods based on the existing evaluation metrics. Furthermore, we proposed a new evaluation metric and benchmarked the existing GNN explainability methods with this metric on real-world datasets. The second problem is semi-supervised learning for GNNs. A majority of GNN studies focus on semi-supervised learning because labeled data are scarce in graph-based tasks. To address this challenge, GNNs use message-passing frameworks to combine information from unlabeled data with labeled data. However, under the message-passing framework the use of unlabeled data in training is indirect: unlabeled data do not supervise the training process. To tackle this problem, we propose a novel dual-view cooperative training framework that allows unlabeled data to directly supervise training. We further use a GNN explainability method to justify our framework and provide theoretical analysis.
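The message-passing mechanism described above, in which unlabeled nodes contribute features to their neighbors' representations even though only labeled nodes drive the loss, can be illustrated with a minimal sketch. The graph, features, normalization scheme, and function names below are hypothetical examples for illustration only, not the thesis's actual model.

```python
import numpy as np

def message_passing_step(adj, features, weight):
    """One mean-aggregation message-passing step (illustrative sketch).

    adj:      (n, n) adjacency matrix with self-loops
    features: (n, d) node feature matrix (labeled and unlabeled nodes alike)
    weight:   (d, k) learnable projection
    """
    deg = adj.sum(axis=1, keepdims=True)   # node degrees (with self-loops)
    agg = (adj @ features) / deg           # average over each node's neighborhood
    return np.maximum(agg @ weight, 0.0)   # linear transform + ReLU

# Tiny 3-node path graph: even if node 2 is unlabeled, its features
# still flow into node 1's representation through the aggregation.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=float)
feats = np.eye(3)                          # one-hot features per node
w = np.ones((3, 2))
out = message_passing_step(adj, feats, w)
print(out.shape)  # (3, 2)
```

Stacking several such steps lets information propagate multiple hops, which is how unlabeled nodes influence the predictions at labeled nodes during training, albeit only indirectly, as the abstract notes.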

Keywords:
Artificial neural network, Deep learning, Benchmark, Graph, Deep neural networks, Metric

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
References: 0

Topics

Explainable Artificial Intelligence (XAI)
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Graph Neural Networks
Physical Sciences →  Computer Science →  Artificial Intelligence
Big Data and Digital Economy
Physical Sciences →  Computer Science →  Information Systems