On the Linear Convergence Rate of Generalized ADMM for Convex Composite Programming

From MaRDI portal
Publication:6401419

arXiv: 2206.03649 · MaRDI QID: Q6401419


Authors: Han Wang, Peili Li, Yunhai Xiao


Publication date: 7 June 2022

Abstract: Over the past few years, the numerical success of the generalized alternating direction method of multipliers (GADMM) proposed by Eckstein & Bertsekas [Math. Prog., 1992] has attracted intensive attention to its theoretical convergence properties. In this paper, we establish the linear convergence rate of the semi-proximal GADMM (sPGADMM) for solving linearly constrained convex composite optimization problems. The semi-proximal terms contained in each subproblem make it possible to handle multi-block problems efficiently. We first present some important inequalities for the sequence generated by the sPGADMM, and then establish the local linear convergence rate under a calmness assumption. As a by-product, the global convergence property is also discussed.
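To illustrate the kind of iteration the abstract refers to, here is a minimal numeric sketch of a semi-proximal generalized ADMM step on a toy scalar consensus problem. The problem instance, the scalar semi-proximal weights `s` and `t` (standing in for the semi-proximal operators), and the relaxation factor `rho` in (0, 2) are illustrative assumptions, not the paper's exact setting.

```python
# Toy problem:  min_{x,z} f(x) + g(z)  s.t.  x - z = 0,
# with f(x) = (x - a)^2 / 2 and g(z) = (z - b)^2 / 2,
# whose unique solution is x = z = (a + b) / 2.
# The relaxation step follows the Eckstein-Bertsekas generalized ADMM;
# s, t > 0 mimic semi-proximal terms (s = t = 0 recovers classical ADMM).

def spgadmm(a, b, sigma=1.0, rho=1.5, s=0.1, t=0.1, iters=300):
    x = z = u = 0.0  # primal iterates and scaled multiplier
    for _ in range(iters):
        # x-step: argmin f(x) + (sigma/2)(x - z + u)^2 + (s/2)(x - x_prev)^2
        x = (a + sigma * (z - u) + s * x) / (1.0 + sigma + s)
        # generalized (over-relaxed) x-iterate, rho in (0, 2)
        x_hat = rho * x + (1.0 - rho) * z
        # z-step: argmin g(z) + (sigma/2)(x_hat - z + u)^2 + (t/2)(z - z_prev)^2
        z = (b + sigma * (x_hat + u) + t * z) / (1.0 + sigma + t)
        # multiplier (scaled dual) update
        u = u + x_hat - z
    return x, z

x, z = spgadmm(1.0, 3.0)
print(x, z)  # both iterates approach (a + b) / 2 = 2.0
```

The linear (geometric) decrease of the error along the iterations is the kind of behavior the paper's calmness-based local linear convergence result makes rigorous for the general composite setting.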
