JOURNAL ARTICLE

A Framework for Enhancing Graph Neural Networks Using Explanations

Abstract

Many real-world problems are modeled as graphs that represent relationships between entities. Graph Neural Networks (GNNs) are a powerful class of neural networks that combine vertex and edge attributes with node neighborhood structure to infer properties of graph data. Message Passing Neural Networks (MPNNs), a common type of GNN, match the expressiveness of the first-order Weisfeiler-Leman (1-WL) algorithm when learning representations for classification tasks. However, 1-WL has well-known limits in expressiveness, which in turn constrain GNN performance. Separately, eXplainable Artificial Intelligence (XAI) is a sub-field of machine learning that addresses the "black-box" nature of neural networks; several projects, such as GNNExplainer, provide post-hoc explanations for GNN predictions. This work combines XAI methods with graph mining to develop a computational framework that improves GNN performance. Its main contributions are:

(1) Explanation Enhanced Graph Learning (EEGL), a new computational framework that addresses the performance limitations of GNNs by annotating the input with relevant local structural information derived from explanation artifacts and graph mining. Experiments show that data annotated in this way yields higher model performance.

(2) A study of four types of noise in our synthetic data and their effects on GNN learnability, showing that EEGL mitigates these adverse effects and improves performance even on noisy data.

(3) GNNs as logical classifiers: logical characterization provides a structured way to analyze and define the expressiveness of GNN models, e.g., "learning a query," meaning learning a node classification problem with a single logic formula applied uniformly across all graphs. Through experiments, we examine the inductive learning characteristics of GNNs and the models' ability to generalize the same logical rule across structurally diverse graphs.

(4) A philosophy-of-science perspective on explainable AI, including a high-level framework called ExpSpec for contextualizing and defining a set of requirements for explanations.
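The abstract's first contribution, annotating nodes with local structural information mined from explanation artifacts, can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not the authors' EEGL implementation: it takes edges an explainer marked as important, counts a simple local pattern (triangles restricted to those edges) per node, and appends the count as a new node feature for a subsequent training round.

```python
# Hypothetical sketch of EEGL-style feature annotation (assumed API, not the
# authors' code): count a simple structural pattern over explainer-selected
# edges and append it to each node's feature vector.
from collections import defaultdict

def annotate_with_pattern_counts(num_nodes, important_edges, features):
    """For each node, count the triangles it forms using only edges the
    explainer marked as important (a stand-in for the mined frequent
    subgraph patterns described in the abstract), and append that count
    to the node's feature vector."""
    adj = defaultdict(set)
    for u, v in important_edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0] * num_nodes
    for u in range(num_nodes):
        nbrs = list(adj[u])
        # Count unordered neighbor pairs that are themselves connected.
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                if nbrs[j] in adj[nbrs[i]]:
                    counts[u] += 1
    return [feats + [c] for feats, c in zip(features, counts)]

# Toy example: 4 nodes, one triangle (0, 1, 2) plus a pendant edge to node 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
feats = [[1.0], [1.0], [1.0], [1.0]]
annotated = annotate_with_pattern_counts(4, edges, feats)
# Nodes 0, 1, 2 each lie on one triangle; node 3 on none.
```

In the actual framework, the appended features would come from frequent subgraph mining over explanation subgraphs rather than a fixed triangle count; the sketch only shows the annotate-then-retrain shape of the pipeline.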

Keywords:
Graph Neural Networks, Explainable AI, Message Passing Neural Networks, Weisfeiler-Leman algorithm, Node Classification, Graph Mining

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.38


Related Documents

JOURNAL ARTICLE

A Framework for Enhancing Graph Neural Networks Using Explanations

Naik, Harish Ganapati

Journal: OPAL (Open@LaTrobe) (La Trobe University). Year: 2025
JOURNAL ARTICLE

SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods

Hyeoncheol Cho, Youngrock Oh, Eunjoo Jeon

Journal: Advances in Artificial Intelligence and Machine Learning. Year: 2023, Vol: 03 (02), Pages: 1165-1179
BOOK-CHAPTER

Game Theoretic Explanations for Graph Neural Networks

Ataollah Kamal, Céline Robardet, Marc Plantevit

Communications in Computer and Information Science. Year: 2024, Pages: 217-232