Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity
DOI: 10.1137/23M158111X
MaRDI QID: Q6561383
Bin Shi, Yaxiang Yuan, Bo-wen Li
Publication date: 25 June 2024
Published in: SIAM Journal on Optimization
Keywords: Lyapunov function; phase-space representation; \texttt{NAG}; \(\mu\)-strongly convex function; \texttt{FISTA}
MSC classifications: Convex programming (90C25) · Ridge regression; shrinkage estimators (Lasso) (62J07) · Analysis of algorithms and problem complexity (68Q25) · Nonlinear programming (90C30) · Numerical aspects of computer graphics, image analysis, and computational geometry (65D18)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Dynamical Approach to an Inertial Forward-Backward Algorithm for Convex Minimization
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- A second-order differential system with Hessian-driven damping; application to non-elastic shock laws
- Fast convex optimization via inertial dynamics with Hessian driven damping
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- An introduction to continuous optimization for imaging
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- A variational perspective on accelerated methods in optimization
- From differential equation solvers to accelerated first-order methods for convex optimization
- Understanding the acceleration phenomenon via high-resolution differential equations