Convergence rates for shallow neural networks learned by gradient descent

DOI: 10.3150/23-BEJ1605
arXiv: 2107.09550
OpenAlex: W4388506988
MaRDI QID: Q6137712
FDO: Q6137712


Authors: Alina Braun, Michael Kohler, Sophie Langer, Harro Walk


Publication date: 16 January 2024

Published in: Bernoulli

Abstract: In this paper we analyze the $L_2$ error of neural network regression estimates with one hidden layer. Under the assumption that the Fourier transform of the regression function decays suitably fast, we show that an estimate, where all initial weights are chosen according to proper uniform distributions and where the weights are learned by gradient descent, achieves a rate of convergence of $1/\sqrt{n}$ (up to a logarithmic factor). Our statistical analysis implies that the key aspect behind this result is the proper choice of the initial inner weights and the adjustment of the outer weights via gradient descent. This indicates that we can also simply use linear least squares to choose the outer weights. We prove a corresponding theoretical result and compare our new linear least squares neural network estimate with standard neural network estimates via simulated data. Our simulations show that our theoretical considerations lead to an estimate with improved performance. Hence the development of statistical theory can indeed improve neural network estimates.
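The abstract's key observation, that with properly initialized inner weights the outer weights can be fitted by linear least squares, admits a compact illustration. Below is a minimal sketch in Python/NumPy of such a linear least squares neural network estimate; the width K, the initialization range B, the sigmoid activation, and the helper name lls_shallow_net are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def lls_shallow_net(X, y, K=50, B=10.0, seed=None):
    """One-hidden-layer network: inner weights drawn from uniform
    distributions, outer weights fitted in closed form by least squares
    (in place of gradient descent on the outer layer)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random inner weights and biases; the range [-B, B] is an
    # illustrative stand-in for the paper's "proper uniform distributions".
    W = rng.uniform(-B, B, size=(K, d))
    b = rng.uniform(-B, B, size=K)
    # Hidden layer with a sigmoid squasher.
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    # Append an intercept column and solve for the outer weights.
    design = np.hstack([H, np.ones((n, 1))])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)

    def predict(X_new):
        H_new = 1.0 / (1.0 + np.exp(-(X_new @ W.T + b)))
        return np.hstack([H_new, np.ones((len(X_new), 1))]) @ beta

    return predict

# Toy simulated-data check: regression function m(x) = sin(2*pi*x)
# observed with Gaussian noise, mirroring the kind of simulation study
# the abstract describes.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
m_hat = lls_shallow_net(X, y, K=50, B=10.0, seed=1)
print(m_hat(np.linspace(0.0, 1.0, 5).reshape(-1, 1)))
```

Because only the outer weights are estimated, fitting reduces to a single least squares solve, which is the computational advantage the abstract contrasts with full gradient descent training.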


Full work available at URL: https://arxiv.org/abs/2107.09550







Cites Work


Cited In (1)




