Exact worst-case convergence rates of the proximal gradient method for composite convex minimization


DOI: 10.1007/S10957-018-1298-1
zbMATH Open: 1394.90464
arXiv: 1705.04398
OpenAlex: W2615571142
Wikidata: Q129831043
MaRDI QID: Q1670100


Authors: Adrien B. Taylor, Julien M. Hendrickx, François Glineur


Publication date: 4 September 2018

Published in: Journal of Optimization Theory and Applications

Abstract: We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for different standard performance measures: objective function accuracy, distance to optimality and residual gradient norm. The proof methodology relies on recent developments in performance estimation of first-order methods based on semidefinite programming. In the case of the proximal gradient method, this methodology allows obtaining exact and non-asymptotic worst-case guarantees that are conceptually very simple, although apparently new. On the way, we discuss how strong convexity can be replaced by weaker assumptions, while preserving the corresponding convergence rates. We also establish that the same fixed step size policy is optimal for all three performance measures. Finally, we extend recent results on the worst-case behavior of gradient descent with exact line search to the proximal case.


Full work available at URL: https://arxiv.org/abs/1705.04398
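
The abstract describes the proximal gradient method with a fixed step size applied to a composite objective f + h, where f is smooth and h has an available proximal operator. The snippet below is a minimal sketch of that iteration only, not of the paper's worst-case analysis; the lasso-type instance, the soft-thresholding prox, the step size 1/L, and the names (proximal_gradient, prox_l1) are illustrative assumptions, not taken from the paper.

import numpy as np

def proximal_gradient(grad_f, prox_h, x0, step, n_iter=500):
    # Fixed-step proximal gradient: x_{k+1} = prox_{step*h}(x_k - step * grad_f(x_k))
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_h(x - step * grad_f(x), step)
    return x

# Illustrative instance (not from the paper): lasso with f(x) = 0.5*||Ax - b||^2 and
# h(x) = lam*||x||_1, whose proximal operator is soft-thresholding.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)                                    # gradient of the smooth part
prox_l1 = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0)  # prox of t * lam * ||.||_1

L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad_f (largest eigenvalue of A^T A)
x_hat = proximal_gradient(grad_f, prox_l1, np.zeros(20), step=1.0 / L)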



