Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
From MaRDI portal
Publication: 1670100
DOI: 10.1007/S10957-018-1298-1
zbMATH Open: 1394.90464
arXiv: 1705.04398
OpenAlex: W2615571142
Wikidata: Q129831043 (Scholia: Q129831043)
MaRDI QID: Q1670100
FDO: Q1670100
Authors: Adrien B. Taylor, Julien M. Hendrickx, François Glineur
Publication date: 4 September 2018
Published in: Journal of Optimization Theory and Applications
Abstract: We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for three standard performance measures: objective function accuracy, distance to optimality, and residual gradient norm. The proof methodology relies on recent developments in performance estimation of first-order methods based on semidefinite programming. In the case of the proximal gradient method, this methodology yields exact, non-asymptotic worst-case guarantees that are conceptually simple, yet apparently new. Along the way, we discuss how strong convexity can be replaced by weaker assumptions while preserving the corresponding convergence rates. We also establish that the same fixed step size policy is optimal for all three performance measures. Finally, we extend recent results on the worst-case behavior of gradient descent with exact line search to the proximal case.
Full work available at URL: https://arxiv.org/abs/1705.04398
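The method analyzed, in its basic fixed-step form, iterates x_{k+1} = prox_{gamma*g}(x_k - gamma * grad f(x_k)). The following is a minimal sketch, not taken from the paper, of that iteration on a toy composite problem (least squares plus an l1 penalty, whose proximal operator is soft-thresholding); the problem data, regularization weight, step size gamma = 1/L, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the paper): proximal gradient iterations
#   x_{k+1} = prox_{gamma*g}(x_k - gamma * grad_f(x_k))
# on a toy composite problem f(x) + g(x) with
#   f(x) = 0.5*||A x - b||^2   (smooth; grad f is L-Lipschitz with L = ||A^T A||_2)
#   g(x) = lam*||x||_1         (nonsmooth; prox is componentwise soft-thresholding)
# The data A, b, lam and the fixed step size gamma = 1/L are assumed for illustration.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1

L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad f (spectral norm)
gamma = 1.0 / L                  # fixed step size

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_g(x, t):
    # proximal operator of t*lam*||.||_1: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(A.shape[1])
for k in range(500):
    x = prox_g(x - gamma * grad_f(x), gamma)

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)
print(f"objective after 500 iterations: {obj:.6f}")
```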
Recommendations
- Convergence rate analysis of proximal gradient methods with applications to composite minimization problems
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- Convergence rates of proximal gradient methods via the convex conjugate
- A note on the (accelerated) proximal gradient method for composite convex optimization
- Convergence rate of a proximal multiplier algorithm for separable convex minimization
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- Inexact proximal stochastic gradient method for convex composite optimization
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- Exact worst-case performance of first-order methods for composite convex optimization
- On convergence rates of proximal alternating direction method of multipliers
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Gradient methods for minimizing composite functions
- Proximal splitting methods in signal processing
- First-order methods of smooth convex optimization with inexact oracle
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Performance of first-order methods for smooth convex minimization: a novel approach
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Convex optimization: algorithms and complexity
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- Information-based complexity of linear operator equations
- The exact information-based complexity of smooth convex minimization
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Linear convergence of first order methods for non-strongly convex optimization
- A primer on monotone operator methods
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
Cited In (24)
- Online composite optimization with time-varying regularizers
- Forward-backward algorithm for functions with locally Lipschitz gradient: applications to mean field games
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- An elementary approach to tight worst case complexity analysis of gradient based methods
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- A splitting method for the locality regularized semi-supervised subspace clustering
- Convergence rate analysis of proximal gradient methods with applications to composite minimization problems
- Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
- Interpolation conditions for linear operators and applications to performance estimation problems
- PEPIT: computer-assisted worst-case analyses of first-order optimization methods in python
- An Unrolled Implicit Regularization Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging with Convergence Guarantee
- Efficient first-order methods for convex minimization: a constructive approach
- Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- A frequency-domain analysis of inexact gradient methods
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Adaptive restart of the optimized gradient method for convex optimization
- Analysis of optimization algorithms via sum-of-squares
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- A note on the optimal convergence rate of descent methods with fixed step sizes for smooth strongly convex functions
- Polynomial preconditioners for regularized linear inverse problems
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Principled analyses and design of first-order methods with inexact proximal operators