The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
DOI: 10.1007/s11590-021-01821-1
zbMath: 1493.90138
arXiv: 2104.05468
OpenAlex: W3204743358
MaRDI QID: Q2673524
Hadi Abbaszadehpeivasti, Moslem Zamani, Etienne de Klerk
Publication date: 10 June 2022
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/2104.05468
Mathematics Subject Classification
- Semidefinite programming (90C22)
- Nonconvex programming, global optimization (90C26)
- Methods of reduced gradient type (90C52)
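The publication studies the gradient method with fixed step lengths applied to \(L\)-smooth functions and characterizes its exact worst-case convergence rate. As a point of reference only, the sketch below shows the fixed-step iteration and the smallest-gradient-norm measure typically tracked in this setting; the quadratic test function, the step-length choice, and the function names are illustrative assumptions, not code or results from the paper.

```python
import numpy as np

# Minimal sketch (assumed, for illustration): gradient descent with a fixed
# step length on an L-smooth function. The test function f(x) = 0.5 * x^T A x
# is illustrative only; its gradient is Lipschitz with constant
# L = largest eigenvalue of A.
def gradient_method(grad, x0, step, num_iters):
    """Run fixed-step gradient descent and return the final iterate together
    with the smallest gradient norm observed along the trajectory."""
    x = np.asarray(x0, dtype=float)
    best_grad_norm = np.inf
    for _ in range(num_iters):
        g = grad(x)
        best_grad_norm = min(best_grad_norm, np.linalg.norm(g))
        x = x - step * g  # fixed step length at every iteration
    return x, best_grad_norm

# Illustrative example: a convex quadratic with smoothness constant L = 10.
A = np.diag([1.0, 10.0])
L = 10.0
grad = lambda x: A @ x
x_final, gnorm = gradient_method(grad, x0=[5.0, 5.0], step=1.0 / L, num_iters=100)
print(gnorm)
```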
Related Items
- Conditions for linear convergence of the gradient method for non-convex optimization
- Heterogeneous Mediation Analysis on Epigenomic PTSD and Traumatic Stress in a Predominantly African American Cohort
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
Cites Work
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Introductory lectures on convex optimization. A basic course.
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Differentiable functions on Banach spaces with Lipschitz derivatives
- Lower bounds for finding stationary points I
- Performance of first-order methods for smooth convex minimization: a novel approach
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Optimization Methods for Large-Scale Machine Learning
- Non-convex Optimization for Machine Learning
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization