Understanding the message passing in graph neural networks via power iteration clustering

From MaRDI portal
Publication: 6078751

DOI: 10.1016/J.NEUNET.2021.02.025
zbMATH Open: 1521.68138
arXiv: 2006.00144
MaRDI QID: Q6078751


Authors: Xue Li, Yuanzhi Cheng


Publication date: 28 September 2023

Published in: Neural Networks

Abstract: The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from interpreting GNNs as convolutional neural networks, no theoretical origin for them has been proposed. To our surprise, message passing is best understood in terms of power iteration. By fully or partly removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that learn iteratively with only one aggregator. Experiments show that our models extend GNNs and enhance their ability to process networks with random features. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and define a lower limit for model evaluation by a random message-passing aggregator. Our findings push the boundaries of the theoretical understanding of neural networks.
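The abstract's central observation can be sketched in a few lines of NumPy. This is an illustration, not the authors' SPIC implementation: the toy graph, the symmetric normalization, and the iteration count are all assumptions chosen for the demo. Once activation functions and layer weights are removed, one GNN "layer" is just multiplication by a fixed aggregator, so stacking layers performs power iteration on that matrix.

```python
import numpy as np

# Toy sketch (not the authors' SPIC code): message passing without weights or
# activations reduces to power iteration on the normalized adjacency matrix.

# Small undirected graph: two triangles joined by one bridge edge (6 nodes).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Symmetric normalization D^{-1/2} A D^{-1/2}, a common GNN aggregator.
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))

# Power iteration: start from random node features, aggregate once per "layer",
# renormalize after each step.
rng = np.random.default_rng(0)
x = rng.standard_normal(A.shape[0])
for _ in range(5000):
    x = S @ x
    x /= np.linalg.norm(x)

# For a connected, non-bipartite graph the dominant eigenvector of S is
# proportional to sqrt(d), so plain repeated aggregation converges to it
# (up to sign).
v = np.sqrt(d)
v /= np.linalg.norm(v)
print(np.allclose(np.abs(x), v, atol=1e-6))  # → True
```

The same convergence is why unchecked deep message passing oversmooths features toward a single dominant direction; SPIC-style models exploit this by iterating a subspace of several feature vectors rather than a single one.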


Full work available at URL: https://arxiv.org/abs/2006.00144






Cited In (13)





