A mean field view of the landscape of two-layer neural networks

Publication: 4967449

DOI: 10.1073/PNAS.1806579115
zbMATH Open: 1416.92014
arXiv: 1804.06561
OpenAlex: W2963095610
Wikidata: Q56610168
Scholia: Q56610168
MaRDI QID: Q4967449
FDO: Q4967449


Authors: Song Mei, Andrea Montanari, Phan-Minh Nguyen


Publication date: 3 July 2019

Published in: Proceedings of the National Academy of Sciences

Abstract: Multi-layer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a non-convex, high-dimensional objective (the risk function), a problem usually attacked with stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the first case, does this happen because local minima are absent, or because SGD somehow avoids them? In the second, why do the local minima reached by SGD have good generalization properties? In this paper we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, the SGD dynamics is captured by a certain non-linear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows one to 'average out' some of the complexities of the landscape of neural networks, and can be used to prove a general convergence result for noisy SGD.
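
For orientation, a sketch of the distributional dynamics referred to above, written in the style of the arXiv version: the limiting empirical distribution \rho_t of the neuron parameters \theta evolves under a nonlinear PDE. The symbols \xi(t) (rescaled step size), \Psi (effective potential), V, U, and the single-unit response \sigma_*(x;\theta) are quoted here for illustration only and should be checked against the paper; the exact constants and the noise term may differ.

\[
\partial_t \rho_t \;=\; 2\xi(t)\, \nabla_\theta \cdot \bigl( \rho_t \, \nabla_\theta \Psi(\theta; \rho_t) \bigr),
\qquad
\Psi(\theta; \rho) \;=\; V(\theta) + \int U(\theta, \bar{\theta}) \, \rho(\mathrm{d}\bar{\theta}),
\]
\[
V(\theta) \;=\; -\,\mathbb{E}\{\, y \, \sigma_*(x; \theta) \,\},
\qquad
U(\theta_1, \theta_2) \;=\; \mathbb{E}\{\, \sigma_*(x; \theta_1)\, \sigma_*(x; \theta_2) \,\},
\]

with an additional diffusion (Laplacian) term on the right-hand side in the noisy-SGD case.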


Full work available at URL: https://arxiv.org/abs/1804.06561













