The rate of convergence of optimization algorithms obtained via discretizations of heavy ball dynamical systems for convex optimization problems
From MaRDI portal
Publication: 5054738
DOI: 10.1080/02331934.2021.1925896 · OpenAlex: W3164461583 · MaRDI QID: Q5054738
Publication date: 29 November 2022
Published in: Optimization
Full work available at URL: https://doi.org/10.1080/02331934.2021.1925896
convex function; dynamical system; Hessian-driven damping; unconstrained optimization problems; Nesterov gradient method
Convex programming (90C25) Nonlinear programming (90C30) Numerical optimization and variational techniques (65K10) Iterative procedures involving nonlinear operators (47J25)
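The publication's subject is the heavy-ball dynamical system and its discretizations. As an illustrative sketch only (not the specific scheme analyzed in the paper), the classical heavy-ball iteration arises from a finite-difference discretization of Polyak's heavy-ball ODE x''(t) + γ x'(t) + ∇f(x(t)) = 0; the function name and parameters below are chosen for illustration:

```python
import numpy as np

def heavy_ball(grad, x0, step=0.1, momentum=0.9, iters=500):
    """Heavy-ball (Polyak momentum) iteration: an explicit discretization
    of the heavy-ball ODE for minimizing a smooth convex function.

    Parameters are illustrative; convergence requires suitable step and
    momentum values for the given objective.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        # x_{k+1} = x_k - s * grad f(x_k) + beta * (x_k - x_{k-1})
        x_next = x - step * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize the convex quadratic f(x) = ||x - 1||^2 / 2,
# whose gradient is x - 1 and whose minimizer is the all-ones vector.
x_star = heavy_ball(lambda x: x - 1.0, x0=np.zeros(2))
```

For this quadratic, the iterates spiral toward the minimizer at a linear rate governed by the momentum parameter; the continuous-time analyses cited below study the analogous decay rates of the underlying dynamical system.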
Cites Work
- Fast convex optimization via inertial dynamics with Hessian driven damping
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics.
- Introductory lectures on convex optimization. A basic course.
- Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity
- Tikhonov regularization of a second order dynamical system with Hessian driven damping
- An extension of the second order dynamical system that models Nesterov's convex gradient method
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- Introduction to Nonlinear Optimization
- On the Long Time Behavior of Second Order Differential Equations with Asymptotically Small Dissipation
- The heavy ball with friction method. I: The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system
- Convergence Rates of Inertial Forward-Backward Algorithms
- An Inertial Newton Algorithm for Deep Learning
- A second-order dynamical approach with variable damping to nonconvex smooth minimization
- Some methods of speeding up the convergence of iteration methods