Graph Neural Networks (GNNs), neural network architectures tailored to learning representations of graphs, have become a popular model for prediction tasks on nodes, graphs, and configurations of points, with wide success in practice. This article summarizes a selection of emerging theoretical results on the approximation and learning properties of widely used message passing GNNs and higher-order GNNs, focusing on representation, generalization, and extrapolation. Along the way, it highlights broader mathematical connections.
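To make the message passing scheme referenced above concrete, the following is a minimal sketch (not from the article) of one sum-aggregation message passing layer: each node combines its own features with an aggregate of its neighbors' features via learned weight matrices, followed by a nonlinearity. The function and variable names here are illustrative choices, not the article's notation.

```python
import numpy as np

def message_passing_layer(H, A, W_self, W_nbr):
    """One round of sum-aggregation message passing.

    H: (n, d) node feature matrix
    A: (n, n) adjacency matrix
    Each node combines its own features with the sum of its
    neighbors' features, then applies a ReLU nonlinearity.
    """
    messages = A @ H                     # sum neighbor features
    Z = H @ W_self + messages @ W_nbr    # combine self and neighbor info
    return np.maximum(Z, 0.0)            # ReLU

# Tiny example: a path graph on 3 nodes with one-hot features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
rng = np.random.default_rng(0)
W_self = rng.standard_normal((3, 4))
W_nbr = rng.standard_normal((3, 4))
H1 = message_passing_layer(H, A, W_self, W_nbr)
print(H1.shape)  # (3, 4)
```

Stacking k such layers lets each node's representation depend on its k-hop neighborhood, which is the mechanism whose representational power the theoretical results discussed here analyze.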