Gradient descent on infinitely wide neural networks: global convergence and generalization

From MaRDI portal
Publication: 6200217

DOI: 10.4171/ICM2022/121
arXiv: 2110.08084
MaRDI QID: Q6200217
FDO: Q6200217

Lénaïc Chizat, Francis Bach

Publication date: 22 March 2024

Published in: International Congress of Mathematicians

Abstract: Many supervised machine learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many mathematical guarantees exist. Models which are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this review paper, we consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.


Full work available at URL: https://arxiv.org/abs/2110.08084
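
As a reading aid only, here is a minimal numerical sketch of the setting described in the abstract: a two-layer network with the positively homogeneous ReLU activation, f(x) = (1/m) * sum_j a_j * max(<w_j, x> + b_j, 0), trained by full-batch gradient descent, with a large width m standing in for the infinite-width limit. The data, width, step size, and iteration count are illustrative assumptions and are not taken from the paper; the sketch illustrates the training setup, not the paper's convergence or generalization results.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative data, not from the paper).
n = 64
x = np.linspace(-1.0, 1.0, n).reshape(-1, 1)   # inputs, shape (n, 1)
y = np.sin(3.0 * x).ravel()                    # targets, shape (n,)

# Two-layer network with the positively homogeneous ReLU activation,
# written with the 1/m normalization: f(x) = (1/m) * sum_j a_j * max(w_j.x + b_j, 0).
m = 2048                                       # large width, standing in for the infinite-width regime
w = rng.normal(size=(m, 1))                    # input weights
b = rng.normal(size=m)                         # biases
a = rng.normal(size=m)                         # output weights

lr = 0.25 * m   # illustrative step size, scaled with the width so the outputs move at order 1 per step

for step in range(3001):
    pre = x @ w.T + b                          # pre-activations, shape (n, m)
    act = np.maximum(pre, 0.0)                 # ReLU features
    pred = act @ a / m                         # network outputs, shape (n,)
    resid = pred - y
    loss = 0.5 * np.mean(resid ** 2)
    if step % 500 == 0:
        print(f"step {step:5d}   loss {loss:.6f}")

    # Full-batch gradients of the squared loss with respect to (a, w, b).
    mask = (pre > 0.0).astype(pre.dtype)       # ReLU derivative
    weighted = mask * resid[:, None]           # resid_i * 1[pre_ij > 0]
    grad_a = act.T @ resid / (n * m)
    grad_w = (weighted.T @ x) * a[:, None] / (n * m)
    grad_b = weighted.sum(axis=0) * a / (n * m)

    # Plain gradient descent update.
    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b

With the 1/m normalization the per-parameter gradients shrink as the width grows, so the step size in this sketch is scaled with m to keep the predictions moving at a nontrivial rate; under these illustrative assumptions the printed training loss decreases steadily.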









Cited In (2)





