Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
Abstract: The problem of minimizing a separable convex function under linearly coupled constraints arises in various application domains such as economic systems, distributed control, and network flow. The main challenge in solving this problem is that the data size is very large, which makes standard gradient-based methods impractical. Recently, Necoara, Nesterov, and Glineur [Journal of Optimization Theory and Applications, 173 (2017), 227-254] proposed an efficient randomized coordinate descent method for this type of optimization problem and presented an appealing convergence analysis. In this paper, we develop new techniques to analyze the convergence of such algorithms, which greatly improve the results presented there. This refined result is achieved by extending the second technique of Nesterov [SIAM J. Optim. 22 (2012), 341-362] to general optimization problems with linearly coupled constraints. A novel ingredient of our analysis is the construction of basis vectors for the subspace defined by the linear constraints.
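For concreteness, below is a minimal sketch of the kind of two-coordinate update such randomized coordinate descent methods use, assuming the simplest coupled constraint sum_i x_i = b; it is not the paper's exact algorithm, and all names (random_pair_cd, f_grads, lipschitz, n_iters) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm) of a randomized
# two-coordinate descent step for
#     min  sum_i f_i(x_i)   subject to   sum_i x_i = b.
import numpy as np

def random_pair_cd(x0, f_grads, lipschitz, n_iters=1000, rng=None):
    """x0 must already satisfy the coupling constraint sum(x0) = b."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for _ in range(n_iters):
        # sample two distinct coordinates uniformly at random
        i, j = rng.choice(n, size=2, replace=False)
        gi, gj = f_grads[i](x[i]), f_grads[j](x[j])
        # move along the feasible direction e_i - e_j; the step leaves
        # sum(x) unchanged, so the linear constraint is preserved exactly
        t = (gi - gj) / (lipschitz[i] + lipschitz[j])
        x[i] -= t
        x[j] += t
    return x

# toy usage: minimize sum_i 0.5*(x_i - c_i)^2 subject to sum(x) = 0
c = np.array([3.0, -1.0, 2.0, -4.0])
f_grads = [lambda xi, ci=ci: xi - ci for ci in c]
x_star = random_pair_cd(np.zeros(4), f_grads, lipschitz=np.ones(4), n_iters=5000)
```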
Recommendations
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Random block coordinate descent methods for linearly constrained optimization over networks
- On the complexity analysis of randomized block-coordinate descent methods
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
Cites Work
- scientific article (zbMATH DE number 1818892; title not available)
- scientific article (zbMATH DE number 6253925; title not available)
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Convergence Rate Analysis for the Alternating Direction Method of Multipliers with a Substitution Procedure for Separable Convex Programming
- Coordinate descent with arbitrary sampling. I: Algorithms and complexity.
- Coordinate descent with arbitrary sampling. II: Expected separable overapproximation.
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming
- On optimal probabilities in stochastic coordinate descent methods
- On the complexity analysis of randomized block-coordinate descent methods
- Optimal scaling of a gradient method for distributed resource allocation
- Parallel coordinate descent methods for big data optimization
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Random block coordinate descent methods for linearly constrained optimization over networks
- Regularization schemes for minimum error entropy principle
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Thresholded spectral algorithms for sparse approximations
Cited In (9)
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
- PhaseMax: stable guarantees from noisy sub-Gaussian measurements
- Robust randomized optimization with \(k\) nearest neighbors
- Convergence of online pairwise regression learning with quadratic loss
- Randomized sketch descent methods for non-separable linearly constrained optimization
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Accelerate stochastic subgradient method by leveraging local growth condition