Convergence rates for shallow neural networks learned by gradient descent
Publication: 6137712
DOI: 10.3150/23-BEJ1605
arXiv: 2107.09550
OpenAlex: W4388506988
MaRDI QID: Q6137712
FDO: Q6137712
Authors: Alina Braun, Michael Kohler, Sophie Langer, Harro Walk
Publication date: 16 January 2024
Published in: Bernoulli
Abstract: In this paper we analyze the error of neural network regression estimates with one hidden layer. Under the assumption that the Fourier transform of the regression function decays suitably fast, we show that an estimate, where all initial weights are chosen according to proper uniform distributions and where the weights are learned by gradient descent, achieves a rate of convergence of 1/√n (up to a logarithmic factor). Our statistical analysis implies that the key aspect behind this result is the proper choice of the initial inner weights and the adjustment of the outer weights via gradient descent. This indicates that we can also simply use linear least squares to choose the outer weights. We prove a corresponding theoretical result and compare our new linear least squares neural network estimate with standard neural network estimates on simulated data. Our simulations show that our theoretical considerations lead to an estimate with improved performance. Hence the development of statistical theory can indeed improve neural network estimates.
Full work available at URL: https://arxiv.org/abs/2107.09550
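The linear least squares variant described in the abstract (inner weights drawn from uniform distributions and then kept fixed, outer weights fitted by ordinary least squares) can be sketched as follows. This is a minimal illustration rather than the paper's exact estimator: the number of hidden neurons K, the range of the uniform distributions, and the logistic squasher are assumed choices, not the construction from the paper.

```python
import numpy as np

def lls_shallow_net_fit(X, y, K=50, inner_scale=10.0, seed=None):
    """Shallow-network regression estimate with fixed random inner weights.

    Sketch under assumed choices: K hidden neurons, inner weights and biases
    uniform on [-inner_scale, inner_scale], logistic activation. Only the
    outer weights are fitted, via ordinary linear least squares.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Inner weights and biases drawn from uniform distributions and kept fixed.
    W = rng.uniform(-inner_scale, inner_scale, size=(K, d))
    b = rng.uniform(-inner_scale, inner_scale, size=K)
    # Hidden-layer features with the logistic squasher sigma(u) = 1 / (1 + e^{-u}).
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    # Append a constant column and solve for the outer weights by least squares.
    H1 = np.hstack([H, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(H1, y, rcond=None)

    def predict(X_new):
        H_new = 1.0 / (1.0 + np.exp(-(X_new @ W.T + b)))
        return np.hstack([H_new, np.ones((len(X_new), 1))]) @ coef

    return predict

# Usage on simulated data: y = m(x) + noise for a smooth regression function m.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
m_hat = lls_shallow_net_fit(X, y, K=50, seed=1)
print(m_hat(X[:5]))
```

Because the inner weights stay fixed after the random initialization, the outer weights solve an ordinary linear least squares problem, which is the point of the comparison with standard gradient-descent-trained networks in the paper's simulations.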
MSC classification: Numerical optimization and variational techniques (65K10); Artificial neural networks and deep learning (68T07)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Optimal global rates of convergence for nonparametric regression
- A distribution-free theory of nonparametric regression
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Introduction to the mathematics of medical imaging
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Approximation and estimation bounds for artificial neural networks
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Approximation bounds for random neural networks and reservoir systems
- Nonparametric Regression Based on Hierarchical Interaction Models
- Over-parametrized deep neural networks minimizing the empirical risk do not generalize well
- Nonparametric regression using deep neural networks with ReLU activation function
- Analysis of a two-layer neural network via displacement convexity
- Gradient descent optimizes over-parameterized deep ReLU networks
- On the rate of convergence of fully connected deep neural network regression estimates
- Theoretical issues in deep networks
- Estimation of a Function of Low Local Dimensionality by Deep Neural Networks
- Neural tangent kernel: convergence and generalization in neural networks (invited paper)
Cited In (1)