Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
From MaRDI portal
Publication:1670100
Abstract: We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for three standard performance measures: objective function accuracy, distance to optimality, and residual gradient norm. The proof methodology relies on recent developments in performance estimation of first-order methods based on semidefinite programming. In the case of the proximal gradient method, this methodology yields exact, non-asymptotic worst-case guarantees whose proofs are conceptually very simple, yet apparently new. Along the way, we discuss how strong convexity can be replaced by weaker assumptions while preserving the corresponding convergence rates. We also establish that the same fixed step size policy is optimal for all three performance measures. Finally, we extend recent results on the worst-case behavior of gradient descent with exact line search to the proximal case.
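The method analyzed in the abstract alternates a gradient (forward) step on the smooth term with a proximal (backward) step on the non-smooth term. A minimal sketch is given below for an illustrative composite problem, least squares plus an \(\ell_1\) penalty, whose proximal operator is soft-thresholding; the problem data, parameter names, and the fixed step size \(1/L\) are assumptions chosen for the example, not taken from the paper.

```python
import numpy as np

# Sketch of the proximal gradient method (forward-backward splitting) for
# minimizing F(x) = f(x) + g(x), where f is L-smooth (here a least-squares
# term) and g has an inexpensive proximal operator (here lam * ||x||_1,
# whose prox is soft-thresholding). All names here are illustrative.

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, num_iters=500):
    """Approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
    step = 1.0 / L                          # fixed step size, as in the abstract's fixed-step policy
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)            # forward (gradient) step on f
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step on g
    return x

# Illustrative data: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```

With the fixed step size \(1/L\), each iteration is a descent step on the composite objective, which is the regime in which the paper derives its exact worst-case rates.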
Recommendations
- Convergence rate analysis of proximal gradient methods with applications to composite minimization problems
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- Convergence rates of proximal gradient methods via the convex conjugate
- A note on the (accelerated) proximal gradient method for composite convex optimization
- Convergence rate of a proximal multiplier algorithm for separable convex minimization
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- Inexact proximal stochastic gradient method for convex composite optimization
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- Exact worst-case performance of first-order methods for composite convex optimization
- On convergence rates of proximal alternating direction method of multipliers
Cites work
- scientific article; zbMATH DE number 1818892
- scientific article; zbMATH DE number 4164577
- A primer on monotone operator methods
- Analysis and design of optimization algorithms via integral quadratic constraints
- Convex optimization: algorithms and complexity
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Exact worst-case performance of first-order methods for composite convex optimization
- First-order methods of smooth convex optimization with inexact oracle
- Gradient methods for minimizing composite functions
- Information-based complexity of linear operator equations
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of first order methods for non-strongly convex optimization
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Optimized first-order methods for smooth convex minimization
- Performance of first-order methods for smooth convex minimization: a novel approach
- Proximal splitting methods in signal processing
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The exact information-based complexity of smooth convex minimization
Cited in (29)
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Error bounds, quadratic growth, and linear convergence of proximal methods
- A note on the optimal convergence rate of descent methods with fixed step sizes for smooth strongly convex functions
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
- Online composite optimization with time-varying regularizers
- Convergence rate analysis of proximal gradient methods with applications to composite minimization problems
- Forward-backward algorithm for functions with locally Lipschitz gradient: applications to mean field games
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- Exact worst-case performance of first-order methods for composite convex optimization
- Interpolation conditions for linear operators and applications to performance estimation problems
- Principled analyses and design of first-order methods with inexact proximal operators
- Operator splitting performance estimation: tight contraction factors and optimal parameter selection
- Tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- An Unrolled Implicit Regularization Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging with Convergence Guarantee
- Polynomial preconditioners for regularized linear inverse problems
- Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
- Convergence rates of proximal gradient methods via the convex conjugate
- PEPIT: computer-assisted worst-case analyses of first-order optimization methods in python
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- A splitting method for the locality regularized semi-supervised subspace clustering
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Efficient first-order methods for convex minimization: a constructive approach
- An elementary approach to tight worst case complexity analysis of gradient based methods
- Adaptive restart of the optimized gradient method for convex optimization
- A frequency-domain analysis of inexact gradient methods
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Analysis of optimization algorithms via sum-of-squares