Theory of graph neural networks: representation and learning


DOI: 10.4171/ICM2022/162
arXiv: 2204.07697
OpenAlex: W4389775256
MaRDI QID: Q6200219
FDO: Q6200219

Stefanie Jegelka

Publication date: 22 March 2024

Published in: International Congress of Mathematicians

Abstract: Graph Neural Networks (GNNs), neural network architectures targeted to learning representations of graphs, have become a popular learning model for prediction tasks on nodes, graphs and configurations of points, with wide success in practice. This article summarizes a selection of the emerging theoretical results on approximation and learning properties of widely used message passing GNNs and higher-order GNNs, focusing on representation, generalization and extrapolation. Along the way, it summarizes mathematical connections.
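The abstract refers to message passing GNNs, in which each node iteratively updates its representation by aggregating the features of its neighbors. As an illustrative sketch only (not code from the article), one round of sum-aggregation message passing with a ReLU update can be written as follows; the function name, weight matrices, and the toy path graph are all hypothetical choices for the example:

```python
import numpy as np

def message_passing_layer(adj, features, w_self, w_neigh):
    """One round of message passing (illustrative sketch).

    adj:      (n, n) adjacency matrix (0/1 entries, no self-loops)
    features: (n, d_in) node feature matrix
    w_self:   (d_in, d_out) weights applied to a node's own feature
    w_neigh:  (d_in, d_out) weights applied to aggregated neighbor messages
    """
    messages = adj @ features                        # sum neighbor features
    combined = features @ w_self + messages @ w_neigh
    return np.maximum(combined, 0.0)                 # ReLU nonlinearity

# Tiny example: path graph 0 - 1 - 2, one-hot node features,
# identity weight matrices so the arithmetic is easy to follow.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3)
out = message_passing_layer(adj, feats, np.eye(3), np.eye(3))
```

With identity weights, each output row is the node's own one-hot feature plus the sum of its neighbors' features, so after one round each node "sees" exactly its 1-hop neighborhood; stacking such layers widens the receptive field one hop at a time, which is the mechanism the article's representation results analyze.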


Full work available at URL: https://arxiv.org/abs/2204.07697







