Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis

From MaRDI portal
Publication: 2116020

DOI: 10.1007/S11228-020-00570-0
zbMATH Open: 1487.90643
arXiv: 1810.10051
OpenAlex: W3129000167
MaRDI QID: Q2116020
FDO: Q2116020

Xiangfeng Wang, Jane J. Ye, Xiaoming Yuan, Jin Zhang, Shangzhi Zeng

Publication date: 15 March 2022

Published in: Set-Valued and Variational Analysis

Abstract: We develop new perturbation techniques for conducting convergence analysis of various first-order algorithms for a class of nonsmooth optimization problems. We consider the iteration scheme of an algorithm to construct a perturbed stationary point set-valued map, and define the perturbing parameter by the difference of two consecutive iterates. Then, we show that the calmness condition of the induced set-valued map, together with a local version of the proper separation of stationary value condition, is a sufficient condition to ensure the linear convergence of the algorithm. The equivalence of the calmness condition to the one for the canonically perturbed stationary point set-valued map is proved, and this equivalence allows us to derive some sufficient conditions for calmness by using some recent developments in variational analysis. These sufficient conditions differ from existing results (especially, those based on error bounds) in that they can be easily verified for many concrete application models. Our analysis is focused on the fundamental proximal gradient (PG) method, and it enables us to show that any accumulation point of the sequence generated by the PG method must be a stationary point in terms of the proximal subdifferential, instead of the limiting subdifferential. This reveals the surprising fact that the quality of solutions found by the PG method is in general superior. Our analysis also leads to some improvement of the linear convergence results for the PG method in the convex case. The new perturbation technique can be conveniently used to derive the linear convergence rate of a number of other first-order methods, including the well-known alternating direction method of multipliers and the primal-dual hybrid gradient method, under mild assumptions.
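The abstract's central construction can be illustrated concretely: the PG iteration alternates a gradient step on the smooth part with a proximal step on the nonsmooth part, and the perturbing parameter of the analysis is the difference of two consecutive iterates. Below is a minimal sketch on a LASSO instance; the problem data (A, b, lam), the step size, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, num_iters=1000):
    """PG method for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    perturbations = []                  # ||x^{k+1} - x^k||: the perturbing parameter
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)        # gradient of the smooth part
        x_new = soft_threshold(x - step * grad, step * lam)
        perturbations.append(np.linalg.norm(x_new - x))
        x = x_new
    return x, perturbations

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))       # full column rank, so the smooth part
b = rng.standard_normal(20)             # is strongly convex on this instance
x_star, perts = proximal_gradient(A, b, lam=0.5)
```

On such an instance the sequence of perturbations `perts` decays at a linear rate, which is the behavior the calmness-based sufficient conditions in the paper are designed to certify.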


Full work available at URL: https://arxiv.org/abs/1810.10051


Cited In (5)
