Block Coordinate Descent on Smooth Manifolds: Convergence Theory and Twenty-One Examples
Publication: 6437751
arXiv: 2305.14744
MaRDI QID: Q6437751
FDO: Q6437751
Publication date: 24 May 2023
Abstract: Block coordinate descent is an optimization paradigm that iteratively updates one block of variables at a time; its scalability and performance make it well suited to large-scale applications. Its convergence behavior has been studied extensively in the (block-wise) convex case but is much less explored in the non-convex case. In this paper we analyze the convergence of block coordinate methods on non-convex sets and derive convergence rates on smooth manifolds under assumptions that are natural, or weaker than those in prior work. Our analysis applies to many non-convex problems (e.g., generalized PCA, optimal transport, matrix factorization, Burer-Monteiro factorization, outlier-robust estimation, alternating projection, maximal coding rate reduction, neural collapse, adversarial attacks, homomorphic sensing), either yielding novel corollaries or recovering previously known results.
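As a concrete illustration of the paradigm described in the abstract, the following is a minimal Python sketch of two-block coordinate descent on a product of unit spheres (a smooth manifold): each block is updated in turn by a Riemannian gradient step followed by a retraction. The bilinear objective, step size, and function names here are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of two-block coordinate descent on a product of unit spheres.
# Objective: f(x, y) = -x^T A y, minimized over ||x|| = ||y|| = 1.
import numpy as np

def retract_to_sphere(v):
    """Retract a point back onto the unit sphere (a smooth manifold)."""
    return v / np.linalg.norm(v)

def block_coordinate_descent(A, num_iters=100, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = retract_to_sphere(rng.standard_normal(m))
    y = retract_to_sphere(rng.standard_normal(n))
    for _ in range(num_iters):
        # Update block x with y held fixed: Riemannian gradient step + retraction.
        grad_x = -A @ y
        grad_x -= (grad_x @ x) * x   # project onto the tangent space at x
        x = retract_to_sphere(x - step * grad_x)
        # Update block y with x held fixed.
        grad_y = -A.T @ x
        grad_y -= (grad_y @ y) * y   # project onto the tangent space at y
        y = retract_to_sphere(y - step * grad_y)
    return x, y

A = np.array([[3.0, 0.0], [0.0, 1.0]])
x, y = block_coordinate_descent(A)
print(x, y, x @ A @ y)
```

On this toy objective the alternating updates behave like a retracted power iteration, and x @ A @ y approaches the top singular value of A (here 3.0); this mirrors the block-wise update pattern the paper analyzes, though the paper's convergence theory covers far more general manifolds and objectives.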