Inertial Newton algorithms avoiding strict saddle points
DOI: 10.1007/s10957-023-02330-0
arXiv: 2111.04596
MaRDI QID: Q6145046
Publication date: 8 January 2024
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2111.04596
Cites Work
- An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
- Fast convex optimization via inertial dynamics with Hessian driven damping
- The gradient and heavy ball with friction dynamical systems: The quasiconvex case
- A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics.
- Local convergence of the heavy-ball method and iPiano for non-convex optimization
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Continuous Newton-like inertial dynamics for monotone inclusions
- Understanding the acceleration phenomenon via high-resolution differential equations
- First-order optimization algorithms via inertial systems with Hessian driven damping
- A gradient-type algorithm with backward inertial steps associated to a nonconvex minimization problem
- Tikhonov regularization of a second order dynamical system with Hessian driven damping
- An extension of the second order dynamical system that models Nesterov's convex gradient method
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- A generalization of Hartman's linearization theorem
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Behavior of accelerated gradient methods near critical points of nonconvex functions
- The stable, center-stable, center, center-unstable, unstable manifolds
- A New Value Iteration method for the Average Cost Dynamic Programming Problem
- The Differential Inclusion Modeling FISTA Algorithm and Optimality of Convergence Rate in the Case b ≤ 3
- Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
- A Dynamical Approach to an Inertial Forward-Backward Algorithm for Convex Minimization
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Newton-like Inertial Dynamics and Proximal Algorithms Governed by Maximally Monotone Operators
- An Inertial Newton Algorithm for Deep Learning
- Optimal Convergence Rates for Nesterov Acceleration
- A Lemma in the Theory of Structural Stability of Differential Equations
- Some methods of speeding up the convergence of iteration methods
- Limit Points of Sequences in Metric Spaces
- Fast optimization via inertial dynamics with closed-loop damping
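Several of the cited entries (e.g. Polyak's "Some methods of speeding up the convergence of iteration methods" and the heavy-ball works) concern the inertial iteration x_{k+1} = x_k − γ∇f(x_k) + β(x_k − x_{k−1}). A minimal Python sketch, with illustrative step and momentum parameters not taken from this publication, showing how the inertial iterates leave the strict saddle of f(x, y) = x² − y²:

```python
import numpy as np

def grad(x):
    # Gradient of f(x, y) = x^2 - y^2, which has a strict saddle at the origin:
    # the Hessian has eigenvalues 2 and -2.
    return np.array([2.0 * x[0], -2.0 * x[1]])

def heavy_ball(x0, step=0.05, momentum=0.8, iters=200):
    # Polyak's heavy-ball iteration:
    #   x_{k+1} = x_k - step * grad(x_k) + momentum * (x_k - x_{k-1})
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        x_next = x - step * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Start near the saddle, slightly perturbed along the unstable (y) direction.
x = heavy_ball(np.array([0.5, 1e-6]))
# The stable coordinate contracts to 0 while the unstable coordinate grows,
# so the iterates escape the strict saddle rather than converging to it.
```

Here the stable coordinate obeys a contracting linear recursion while the unstable one has a dominant root larger than 1, which is the discrete analogue of the stable-manifold arguments used in the saddle-avoidance literature cited above.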