The global optimization geometry of shallow linear neural networks
Publication: 1988338
DOI: 10.1007/s10851-019-00889-w
zbMath: 1434.68534
arXiv: 1805.04938
MaRDI QID: Q1988338
Daniel Soudry, Michael B. Wakin, Zhihui Zhu, Yonina C. Eldar
Publication date: 23 April 2020
Published in: Journal of Mathematical Imaging and Vision
Full work available at URL: https://arxiv.org/abs/1805.04938
68T07: Artificial neural networks and deep learning
90C90: Applications of mathematical programming
90C26: Nonconvex programming, global optimization
Cites Work
- Multilayer feedforward networks are universal approximators
- A geometric analysis of phase retrieval
- Exploiting negative curvature in deterministic and stochastic optimization
- First-order methods almost always avoid strict saddle points
- Cubic regularization of Newton method and its global performance
- Complete Dictionary Recovery Over the Sphere I: Overview and the Geometric Picture
- Accelerated Methods for Nonconvex Optimization
- On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization
- AMP-Inspired Deep Networks for Sparse Linear Inverse Problems
- Global Optimality in Low-Rank Matrix Optimization
- Finding approximate local minima faster than gradient descent
- Convolutional Phase Retrieval via Gradient Descent
- Nonconvex Robust Low-Rank Matrix Recovery
- Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization
- The non-convex geometry of low-rank matrix optimization
- Approximation by superpositions of a sigmoidal function
- A trust region method based on interior point techniques for nonlinear programming