Random neural networks in the infinite width limit as Gaussian processes

From MaRDI portal
Publication: 6138923

DOI: 10.1214/23-AAP1933
arXiv: 2107.01562
MaRDI QID: Q6138923
FDO: Q6138923


Author: Boris Hanin


Publication date: 16 January 2024

Published in: The Annals of Applied Probability

Abstract: This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed, while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions for the distribution of weights and for quite general non-linearities.


Full work available at URL: https://arxiv.org/abs/2107.01562
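
The following is a minimal numerical sketch, not taken from the paper, illustrating the statement of the result: a fully connected network of fixed depth and fixed input/output dimension, with i.i.d. random weights, has outputs whose distribution over network draws looks Gaussian once the hidden layer is wide. To echo the paper's point that only moment conditions are assumed, the sketch uses non-Gaussian (Rademacher) weights with a ReLU non-linearity; the width, number of draws, and test input are illustrative choices.

```python
# Illustrative sketch (assumptions: Rademacher weights, ReLU, zero biases,
# width 4096, one hidden layer); not the paper's construction or proof.
import numpy as np

rng = np.random.default_rng(0)

def random_network_output(x, width, rng):
    """One draw of a network R^d_in -> R^width -> R with ReLU activation and
    i.i.d. Rademacher (+/-1) weights scaled by 1/sqrt(fan-in)."""
    d_in = x.shape[0]
    W1 = rng.choice([-1.0, 1.0], size=(width, d_in)) / np.sqrt(d_in)
    W2 = rng.choice([-1.0, 1.0], size=(1, width)) / np.sqrt(width)
    return (W2 @ np.maximum(W1 @ x, 0.0)).item()

d_in, width, n_draws = 3, 4096, 10000
x = rng.standard_normal(d_in)  # a fixed test input

# Empirical distribution of the scalar output over independent network draws.
samples = np.array([random_network_output(x, width, rng) for _ in range(n_draws)])

# For a Gaussian limit, skewness and excess kurtosis should both be close to 0.
z = (samples - samples.mean()) / samples.std()
print(f"width={width}: skewness={np.mean(z**3):+.3f}, "
      f"excess kurtosis={np.mean(z**4) - 3:+.3f}")
```

Evaluating the same random network at several fixed inputs and examining the joint empirical covariance would, in the same spirit, illustrate convergence to a Gaussian process rather than just a Gaussian marginal.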












