Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
Publication: 5375972
DOI: 10.1142/S0219530518500082
zbMath: 1395.68331
arXiv: 1709.00982
OpenAlex: W2963367716
MaRDI QID: Q5375972
Publication date: 17 September 2018
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/1709.00982
Keywords: large-scale optimization; convergence rate; linearly coupled constraints; randomized block-coordinate descent
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Randomized algorithms (68W20)
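The keywords above point to randomized (block-)coordinate descent under linearly coupled constraints. As a rough illustration of that general technique, and not of the accelerated scheme analyzed in this particular paper, the following sketch minimizes a convex quadratic subject to the coupling constraint that the coordinate sum stays fixed, by updating a randomly chosen pair of coordinates along the feasible direction e_i - e_j; the names `Q`, `c`, and `random_pair_cd` are illustrative, not from the source.

```python
import numpy as np

def random_pair_cd(Q, c, x0, steps=5000, rng=None):
    """Randomized 2-coordinate descent for
        min 0.5 x^T Q x - c^T x   s.t.  sum(x) = sum(x0).
    Each step picks a random pair (i, j) and moves along e_i - e_j,
    so the coupling constraint holds at every iterate.
    """
    rng = np.random.default_rng(rng)
    x = x0.astype(float)
    n = len(x)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        g = Q @ x - c                           # full gradient, for clarity; only g[i], g[j] are used
        Lij = Q[i, i] + Q[j, j] - 2 * Q[i, j]   # curvature along e_i - e_j
        if Lij <= 0:
            continue                            # skip degenerate directions
        t = (g[i] - g[j]) / Lij                 # exact line search along the feasible direction
        x[i] -= t
        x[j] += t
    return x

# Tiny check: the iterates keep sum(x) fixed while the objective decreases.
if __name__ == "__main__":
    gen = np.random.default_rng(0)
    A = gen.standard_normal((8, 5))
    Q = A.T @ A + 0.1 * np.eye(5)               # positive definite
    c = gen.standard_normal(5)
    x0 = np.ones(5)                             # feasible start: sum(x0) = 5
    x = random_pair_cd(Q, c, x0, steps=2000, rng=1)
    print(np.isclose(x.sum(), x0.sum()), 0.5 * x @ Q @ x - c @ x)
```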
Related Items
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Convergence of online pairwise regression learning with quadratic loss
- PhaseMax: Stable guarantees from noisy sub-Gaussian measurements
- Accelerate stochastic subgradient method by leveraging local growth condition
- Robust randomized optimization with k nearest neighbors
Cites Work
- Parallel coordinate descent methods for big data optimization
- On optimal probabilities in stochastic coordinate descent methods
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- On the complexity analysis of randomized block-coordinate descent methods
- Optimal scaling of a gradient method for distributed resource allocation
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Random block coordinate descent methods for linearly constrained optimization over networks
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Coordinate descent with arbitrary sampling I: algorithms and complexity
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- On Full Jacobian Decomposition of the Augmented Lagrangian Method for Separable Convex Programming
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- Regularization schemes for minimum error entropy principle
- Thresholded spectral algorithms for sparse approximations
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Convergence Rate Analysis for the Alternating Direction Method of Multipliers with a Substitution Procedure for Separable Convex Programming
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization