Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
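The portal entry carries no abstract, so as context for the title, below is a minimal sketch of a generic inertial (heavy-ball / Nesterov-type) gradient iteration, the class of methods the publication analyzes. This is an illustrative sketch only: the inertial coefficient k/(k+3), the step size 1/L, and the quadratic test problem are assumptions for demonstration, not the algorithm or parameter choices of the paper itself.

```python
# Generic inertial gradient descent sketch (NOT the paper's specific method).
# Illustrates the iteration x_{k+1} = y_k - step * grad(y_k), where
# y_k = x_k + beta_k * (x_k - x_{k-1}) is an extrapolated (inertial) point.
import numpy as np

def inertial_gradient_descent(grad, x0, step, n_iters=500):
    """Run a Nesterov-style inertial gradient iteration.

    beta_k = k / (k + 3) is a common vanishing-friction choice; the
    publication's own damping/growth assumptions may dictate otherwise.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(n_iters):
        beta = k / (k + 3)                 # inertial (momentum) coefficient
        y = x + beta * (x - x_prev)        # extrapolate using the past iterate
        x_prev, x = x, y - step * grad(y)  # gradient step at the extrapolated point
    return x

# Usage on a simple least-squares objective f(x) = 0.5 * ||A x - b||^2
# (an illustrative test problem, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x_star = inertial_gradient_descent(grad, np.zeros(10), step=1.0 / L)
```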
Recommendations
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Convergence rates of damped inertial dynamics under geometric conditions and perturbations
- Optimal convergence rates for Nesterov acceleration
- Convergence rate of inertial forward-backward algorithm beyond Nesterov's rule
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
Cites work
- scientific article (zbMATH DE number 3850830; no title available)
- scientific article (zbMATH DE number 3313108; no title available)
- scientific article (zbMATH DE number 3371284; no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Activity identification and local linear convergence of forward-backward-type methods
- Adaptive restart for accelerated gradient schemes
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- Analysis and design of optimization algorithms via integral quadratic constraints
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Convergence rate of inertial forward-backward algorithm beyond Nesterov's rule
- Convergence rates of inertial forward-backward algorithms
- Convergence to equilibrium for the backward Euler scheme and applications
- Error bounds and Hölder metric subregularity
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- From error bounds to the complexity of first-order descent methods for convex functions
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of first order methods for non-strongly convex optimization
- New Proximal Point Algorithms for Convex Minimization
- On semi- and subanalytic geometry
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the long time behavior of second order differential equations with asymptotically small dissipation
- On the proximal gradient algorithm with alternated inertia
- Optimal convergence rates for Nesterov acceleration
- Optimal methods of smooth convex minimization
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Some methods of speeding up the convergence of iteration methods
- Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates
- The Differential Inclusion Modeling FISTA Algorithm and Optimality of Convergence Rate in the Case b ≤ 3
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k²
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Variational Analysis
Cited in (13)
- A class of modified accelerated proximal gradient methods for nonsmooth and nonconvex minimization problems
- Fast optimization via inertial dynamics with closed-loop damping
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Optimal convergence rates for Nesterov acceleration
- A nonmonotone accelerated proximal gradient method with variable stepsize strategy for nonsmooth and nonconvex minimization problems
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Stochastic differential equations for modeling first order optimization methods
- Optimal convergence rates for damped inertial gradient dynamics with flat geometries
- Fast convergence of inertial dynamics with Hessian-driven damping under geometry assumptions
- Convergence rates of damped inertial dynamics under geometric conditions and perturbations