Pages that link to "Item:Q4638050"
From MaRDI portal
The following pages link to Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions (Q4638050):
Displaying 28 items.
- A geometric analysis of phase retrieval (Q1785008)
- On initial point selection of the steepest descent algorithm for general quadratic functions (Q2141353)
- Proximal methods avoid active strict saddles of weakly convex functions (Q2143222)
- Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions (Q2156337)
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (Q2167333)
- Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance (Q2233558)
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments (Q2234294)
- Multiscale sparse microcanonical models (Q2319816)
- Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers (Q2425163)
- First-order methods almost always avoid strict saddle points (Q2425175)
- A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points (Q4620423)
- Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions (Q4638050)
- Mutation, Sexual Reproduction and Survival in Dynamic Environments (Q4638065)
- (Q4969223)
- Extending the Step-Size Restriction for Gradient Descent to Avoid Strict Saddle Points (Q5027014)
- On Gradient-Based Learning in Continuous Games (Q5027020)
- Analysis of Asymptotic Escape of Strict Saddle Sets in Manifold Optimization (Q5037575)
- (Q5074079)
- (Q5083087)
- Model-free Nonconvex Matrix Completion: Local Minima Analysis and Applications in Memory-efficient Kernel PCA (Q5214234)
- Convergence guarantees for a class of non-convex and non-smooth optimization problems (Q5214248)
- Null space gradient flows for constrained optimization with applications to shape optimization (Q5854382)
- Global convergence of the gradient method for functions definable in o-minimal structures (Q6052062)
- Polynomial‐time universality and limitations of deep learning (Q6074573)
- Statistical Inference with Local Optima (Q6077585)
- A geometric approach of gradient descent algorithms in linear neural networks (Q6099180)
- Sufficient Conditions for Instability of the Subgradient Method with Constant Step Size (Q6136655)
- Inertial Newton algorithms avoiding strict saddle points (Q6145046)