Alternating minimization methods for strongly convex optimization
From MaRDI portal
Publication:2232092
DOI: 10.1515/JIIP-2020-0074
zbMATH Open: 1478.90092
arXiv: 1911.08987
OpenAlex: W3155282024
MaRDI QID: Q2232092
Authors: Nazarii Tupitsa, Pavel Dvurechensky, Alexander V. Gasnikov, Sergey Guminov
Publication date: 4 October 2021
Published in: Journal of Inverse and Ill-posed Problems
Abstract: We consider alternating minimization procedures for convex optimization problems whose variable is divided into many blocks, each block being amenable to minimization with respect to its own variables while the other blocks are kept fixed. In the two-block case, we prove a linear convergence rate for the alternating minimization procedure under the Polyak-Łojasiewicz condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-block setting, we provide an accelerated alternating minimization procedure whose linear rate depends on the square root of the condition number, as opposed to the condition number itself for the non-accelerated method. We also consider approximating a non-negative solution to a linear system of equations by alternating minimization of the Kullback-Leibler (KL) divergence.
Full work available at URL: https://arxiv.org/abs/1911.08987
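The two-block alternating minimization scheme described in the abstract can be illustrated with a minimal sketch. The quadratic objective below and its closed-form block minimizers are illustrative assumptions, not the authors' test problem: each step exactly minimizes over one block with the other block frozen, which for a strongly convex quadratic yields the linear convergence the paper analyzes.

```python
# Two-block alternating minimization on a strongly convex quadratic
# f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5 * x * y  (illustrative choice).

def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

def argmin_x(y):
    # Exact block minimizer: d/dx f = 2(x - 1) + 0.5 y = 0
    return 1.0 - 0.25 * y

def argmin_y(x):
    # Exact block minimizer: d/dy f = 2(y + 2) + 0.5 x = 0
    return -2.0 - 0.25 * x

def alternating_minimization(x0=0.0, y0=0.0, iters=50):
    x, y = x0, y0
    for _ in range(iters):
        x = argmin_x(y)  # minimize over the first block, second frozen
        y = argmin_y(x)  # minimize over the second block, first frozen
    return x, y

x_star, y_star = alternating_minimization()
# The joint minimizer solves 2x + 0.5y = 2 and 0.5x + 2y = -4,
# i.e. (x, y) = (1.6, -2.4); the iterates contract toward it linearly.
```

The contraction factor here is the product of the off-diagonal coupling ratios (0.25 per block), so the error shrinks by a factor of 0.0625 per full sweep, a concrete instance of the linear rate governed by the conditioning of the problem.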
Recommendations
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- An alternate minimization method beyond positive definite proximal regularization: convergence and complexity
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- A class of alternating linearization algorithms for nonsmooth convex optimization
Keywords: convex optimization; nonconvex optimization; complexity analysis; alternating minimization; block-coordinate method
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of "Eckart-Young" decomposition
- Title not available
- Tensor Decompositions and Applications
- Title not available
- Computational Methods for Inverse Problems
- Title not available
- Iterative Solution of Nonlinear Equations in Several Variables
- On the Nonasymptotic Convergence of Cyclic Coordinate Descent Methods
- Iterative Bregman projections for regularized transportation problems
- A literature survey of low-rank tensor approximation techniques
- On the convergence of alternating minimization for convex programming with applications to iteratively reweighted least squares and decomposition schemes
- Title not available
- Error bounds and convergence analysis of feasible descent methods: A general approach
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Cubic regularization of Newton method and its global performance
- A Statistical Model for Positron Emission Tomography
- Iteration complexity analysis of block coordinate descent methods
- Lectures on convex optimization
- Nonnegative tensor decomposition
- Fast primal-dual gradient method for strongly convex minimization problems with linear constraints
- Title not available
- Convergence of an alternating maximization procedure
- Accelerated alternating descent methods for Dykstra-like problems
- An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Mirror descent and convex optimization problems with non-smooth inequality constraints
- Title not available
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Optimal combination of tensor optimization methods
- An adaptive proximal method for variational inequalities
- A stable alternative to Sinkhorn's algorithm for regularized optimal transport
- Numerical methods for the resource allocation problem in a computer network
- Composite optimization for the resource allocation problem
Cited In (13)
- Randomized methods for computing optimal transport without regularization and their convergence analysis
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
- Nested alternating minimization with FISTA for non-convex and non-smooth optimization problems
- Algorithms for Euclidean-regularised optimal transport
- Alternating minimization as sequential unconstrained minimization: a survey
- Alternating conditional gradient method for convex feasibility problems
- Alternating proximal gradient method for convex minimization
- Stochastic saddle-point optimization for the Wasserstein barycenter problem
- Recent theoretical advances in decentralized distributed convex optimization
- Stochastic approximation versus sample average approximation for Wasserstein barycenters
- Newton-type Alternating Minimization Algorithm for Convex Optimization
- Alternating minimization algorithm for minimizing the sum of strongly convex function and weakly convex function
- Recent Theoretical Advances in Non-Convex Optimization