Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks

Publication:4615339

DOI: 10.1109/TIT.2018.2854560
zbMATH Open: 1428.68255
arXiv: 1707.04926
OpenAlex: W2963417959
Wikidata: Q129563058
Scholia: Q129563058
MaRDI QID: Q4615339
FDO: Q4615339

Jason D. Lee, A. Javanmard, Mahdi Soltanolkotabi

Publication date: 28 January 2019

Published in: IEEE Transactions on Information Theory

Abstract: In this paper we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model in which the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
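To make the planted (realizable) setup in the abstract concrete, the sketch below fits a one-hidden-layer network with quadratic activations to labels generated by planted weights, using plain gradient descent on i.i.d. Gaussian inputs. This is an illustrative toy, not the authors' code: the dimensions, initialization scale, step size, and iteration count are arbitrary assumptions, and the second-layer weights are fixed to one for simplicity.

```python
# Minimal sketch (not the authors' implementation) of the realizable setup:
# i.i.d. Gaussian inputs, labels from planted weights, and a one-hidden-layer
# network with quadratic activations trained by gradient descent.
# All dimensions, the step size, and the iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, k_star, k, n = 10, 3, 20, 100   # input dim, planted width, trained width, samples
lr, steps = 1e-2, 3000

# Planted model: y_i = sum_j (w*_j . x_i)^2, with planted rows scaled to roughly unit norm.
X = rng.standard_normal((n, d))
W_star = rng.standard_normal((k_star, d)) / np.sqrt(d)
y = np.sum((X @ W_star.T) ** 2, axis=1)

# Over-parameterized student: k * d = 200 trainable weights > n = 100 observations.
W = 0.1 * rng.standard_normal((k, d))

for t in range(steps):
    Z = X @ W.T                                   # (n, k): pre-activations w_j . x_i
    resid = np.sum(Z ** 2, axis=1) - y            # (n,): prediction errors
    # Gradient of (1/2n) * sum_i resid_i^2 for f(x) = sum_j (w_j . x)^2.
    grad = (2.0 / n) * (Z * resid[:, None]).T @ X
    W -= lr * grad
    if t % 500 == 0:
        print(f"step {t:4d}  loss {0.5 * np.mean(resid ** 2):.3e}")
```

In this over-parameterized, quadratic-activation toy run the training loss is expected to decrease toward zero, in line with the landscape result described above; the paper's linear-rate convergence guarantee additionally relies on a suitable initialization, which this sketch makes no attempt to reproduce faithfully.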


Full work available at URL: https://arxiv.org/abs/1707.04926







Cited In (36)





