On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
Publication:2115326
DOI: 10.1007/S11590-021-01753-W
zbMATH Open: 1487.90522
arXiv: 1911.00404
OpenAlex: W3165796876
MaRDI QID: Q2115326
FDO: Q2115326
Publication date: 15 March 2022
Published in: Optimization Letters
Abstract: In this paper, the convergence of alternating minimization is established for non-smooth convex optimization in Banach spaces, and novel rates of convergence are provided. The objective function is a composite of a smooth and a non-smooth part, the latter being block-separable, e.g., corresponding to convex constraints or regularization. For the smooth part, three different relaxations of strong convexity are considered: (i) quasi-strong convexity; (ii) quadratic functional growth; and (iii) plain convexity. Linear convergence is established for the first two cases, generalizing and improving previous results for strongly convex problems; sublinear convergence is established for the third case, also improving previous results from the literature. All the convergence results have in common that, in contrast to previous corresponding results for general block coordinate descent, the performance of alternating minimization is governed by the properties of the individual blocks rather than by global properties. Not only does the better-conditioned block determine the performance, as has been observed before in the literature, but the worse-conditioned block additionally enhances it, resulting in potentially significantly improved convergence rates. Furthermore, since only convexity and smoothness properties of the problem are used, the results apply immediately in general Banach spaces.
Full work available at URL: https://arxiv.org/abs/1911.00404
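Below is a minimal sketch, in Python, of the two-block alternating minimization scheme the abstract describes: a composite objective with a smooth convex part f and block-separable non-smooth parts g1, g2, here modeled as non-negativity constraints. The problem data, the inner block solver, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical two-block composite problem:
#   f(x1, x2) = 0.5 * ||A x1 + B x2 - b||^2   (smooth, convex)
# plus block-separable non-smooth terms g1, g2, here the indicator
# functions of the non-negative orthant (convex constraints).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
B = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def block_min(M, r, iters=500):
    """Approximately solve min_{z >= 0} 0.5 * ||M z - r||^2 by projected
    gradient descent -- a stand-in for any exact inner block solver."""
    z = np.zeros(M.shape[1])
    step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1/L, L = Lipschitz const. of grad
    for _ in range(iters):
        z = np.maximum(z - step * M.T @ (M @ z - r), 0.0)
    return z

# Alternating minimization: minimize over block 1 with block 2 frozen,
# then over block 2 with block 1 frozen, and repeat.
x1, x2 = np.zeros(5), np.zeros(5)
for k in range(50):
    x1 = block_min(A, b - B @ x2)
    x2 = block_min(B, b - A @ x1)

print("residual norm:", np.linalg.norm(A @ x1 + B @ x2 - b))
```

Each outer sweep minimizes over one block while the other is frozen; per the paper's results, the achievable rate in such schemes is governed by the convexity and smoothness properties of the individual blocks rather than by global constants.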
Recommendations
- A class of alternating linearization algorithms for nonsmooth convex optimization
- Alternating minimization methods for strongly convex optimization
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- Inertial proximal alternating minimization for nonconvex and nonsmooth problems
Keywords: rate of convergence; convex optimization; Banach spaces; sublinear convergence; alternating minimization; linear convergence
Cites Work
- On the convergence of the block nonlinear Gauss-Seidel method under convex constraints
- Globally convergent block-coordinate techniques for unconstrained optimization
- Coordinate descent algorithms
- On the Convergence of Alternating Minimization for Convex Programming with Applications to Iteratively Reweighted Least Squares and Decomposition Schemes
- On the Convergence of Block Coordinate Descent Type Methods
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Rate of Convergence of Some Space Decomposition Methods for Linear and Nonlinear Problems
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Linear convergence of first order methods for non-strongly convex optimization
Cited In (1)
Uses Software
- DUNE — The Distributed and Unified Numerics Environment