Lagrangian-based methods in convex optimization: prediction-correction frameworks with non-ergodic convergence rates

From MaRDI portal
Publication:6432191

arXiv: 2304.02459 · MaRDI QID: Q6432191 · FDO: Q6432191


Authors: Tao Zhang, Yong Xia, Shiru Li


Publication date: 5 April 2023

Abstract: Lagrangian-based methods are classical methods for solving convex optimization problems with equality constraints. We present novel prediction-correction frameworks for such methods and their variants, which achieve O(1/k) non-ergodic convergence rates for general convex optimization and O(1/k^2) non-ergodic convergence rates under the assumption that the objective function is strongly convex or its gradient is Lipschitz continuous. We give two approaches (updating the multiplier once or twice) to design algorithms satisfying the presented prediction-correction frameworks. As applications, we establish non-ergodic convergence rates for several well-known Lagrangian-based methods, especially the ADMM-type methods and the multi-block ADMM-type methods.
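To make the setting concrete, the following is a minimal sketch of a classical augmented Lagrangian (method of multipliers) iteration for an equality-constrained convex quadratic program. It illustrates only the basic primal step and multiplier update that the paper's frameworks build on; the prediction-correction steps and the specific once/twice multiplier-update rules of the paper are not reproduced here, and all function and variable names are our own.

```python
import numpy as np

def augmented_lagrangian(Q, q, A, b, rho=10.0, iters=200):
    """Solve min 0.5 x'Qx + q'x  s.t.  Ax = b  (Q positive definite)
    by the classical augmented Lagrangian method (illustrative sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)  # Lagrange multiplier for Ax = b
    # The x-step minimizes the augmented Lagrangian and has a closed form:
    # (Q + rho A'A) x = -(q + A'lam - rho A'b)
    K = Q + rho * A.T @ A
    for _ in range(iters):
        x = np.linalg.solve(K, -(q + A.T @ lam - rho * A.T @ b))
        lam = lam + rho * (A @ x - b)  # dual ascent (multiplier update)
    return x, lam

# Tiny example: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
Q = 2.0 * np.eye(2)
q = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = augmented_lagrangian(Q, q, A, b)
# x converges to (0.5, 0.5), the projection of the origin onto the constraint.
```

The O(1/k) and O(1/k^2) non-ergodic rates in the abstract concern the last iterate of such schemes (after the paper's prediction-correction modifications), rather than the averaged (ergodic) iterates for which classical rates are usually stated.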

