Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
DOI: 10.1007/s10957-018-1298-1 · zbMath: 1394.90464 · arXiv: 1705.04398 · OpenAlex: W2615571142 · MaRDI QID: Q1670100
Adrien B. Taylor, Julien M. Hendrickx, François Glineur
Publication date: 4 September 2018
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1705.04398
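The paper analyzes the proximal gradient method for composite problems min_x f(x) + g(x), where f is smooth and convex and g is convex with an inexpensive proximal operator, iterating x_{k+1} = prox_{γg}(x_k − γ∇f(x_k)). For reference only, the following is a minimal runnable sketch of that iteration instantiated on a LASSO problem (g(x) = λ‖x‖₁, whose proximal operator is soft-thresholding); it is not the authors' code, and the problem data, step size γ = 1/L, and iteration budget are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Proximal gradient method for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Iterates x_{k+1} = prox_{step*g}(x_k - step * grad f(x_k)), with
    f(x) = 0.5*||Ax - b||^2 (smooth part) and g(x) = lam*||x||_1.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of f
        x = soft_threshold(x - step * grad, step * lam)   # proximal step on g
    return x

# Illustrative random instance; step = 1/L with L = ||A||_2^2, the Lipschitz
# constant of grad f -- the standard regime for sublinear worst-case rates.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / L)
print("nonzeros in solution:", np.count_nonzero(x_hat))
```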
Related Items (15)
- A frequency-domain analysis of inexact gradient methods
- Adaptive restart of the optimized gradient method for convex optimization
- A note on the optimal convergence rate of descent methods with fixed step sizes for smooth strongly convex functions
- An Unrolled Implicit Regularization Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging with Convergence Guarantee
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- A splitting method for the locality regularized semi-supervised subspace clustering
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- Efficient first-order methods for convex minimization: a constructive approach
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- Analysis of optimization algorithms via sum-of-squares
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Cites Work
- Optimized first-order methods for smooth convex minimization
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The exact information-based complexity of smooth convex minimization
- Information-based complexity of linear operator equations
- Introductory lectures on convex optimization. A basic course.
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Performance of first-order methods for smooth convex minimization: a novel approach
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Linear convergence of first order methods for non-strongly convex optimization
- Proximal Splitting Methods in Signal Processing
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization