Convergence Rates of Proximal Gradient Methods via the Convex Conjugate
DOI: 10.1137/18M1164329
zbMATH: 1410.90151
arXiv: 1801.02509
OpenAlex: W2963169123
Wikidata: Q128560189
MaRDI QID: Q4620416
David H. Gutman, Javier F. Peña
Publication date: 8 February 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1801.02509
MSC Classification
Convex programming (90C25)
Optimality conditions and duality in mathematical programming (90C46)
Methods of reduced gradient type (90C52)
Related Items (2)
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Perturbed Fenchel duality and first-order methods
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- Introductory lectures on convex optimization. A basic course.
- Convergence of first-order methods via the convex conjugate
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- An Optimal First Order Method Based on Optimal Quadratic Averaging
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity