Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
DOI: 10.1142/S0219530518500082 · zbMATH Open: 1395.68331 · arXiv: 1709.00982 · OpenAlex: W2963367716 · MaRDI QID: Q5375972
Authors: Qin Fang, Min Xu, Yiming Ying
Publication date: 17 September 2018
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/1709.00982
Recommendations
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Random block coordinate descent methods for linearly constrained optimization over networks
- On the complexity analysis of randomized block-coordinate descent methods
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
Keywords: convergence rate · large-scale optimization · linearly coupled constraints · randomized block-coordinate descent
MSC classification: Convex programming (90C25) · Analysis of algorithms and problem complexity (68Q25) · Randomized algorithms (68W20)
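As an illustration of the technique named in the title and keywords (not the algorithm analyzed in this paper), the sketch below runs a randomized two-coordinate descent on a toy least-squares problem with a single linear coupling constraint, here assumed to be sum(x) = c; the problem data, constraint, and step rule are all assumptions made for the demo.

```python
# Minimal illustrative sketch (not the paper's method): randomized
# two-coordinate descent for  min 0.5*||A x - y||^2  s.t.  sum(x) = c.
# Updating a random pair (i, j) along e_i - e_j keeps sum(x) unchanged,
# so every iterate stays feasible.  All data below are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
c = 1.0

x = np.full(n, c / n)                     # feasible start: sum(x) == c
r = A @ x - y                             # residual, maintained incrementally

for _ in range(5000):
    i, j = rng.choice(n, size=2, replace=False)
    d_col = A[:, i] - A[:, j]             # image of the feasible direction e_i - e_j
    slope = d_col @ r                     # directional derivative of 0.5*||r||^2
    curv = d_col @ d_col                  # directional curvature of the quadratic
    if curv > 1e-12:
        t = -slope / curv                 # exact line minimization along the feasible direction
        x[i] += t
        x[j] -= t
        r += t * d_col                    # cheap residual update: touches only two columns of A

print("objective:", 0.5 * np.linalg.norm(r) ** 2)
print("constraint residual:", abs(x.sum() - c))
```

Restricting each update to a direction of the form e_i - e_j is the standard way to keep iterates feasible for a single linear coupling constraint, and maintaining the residual incrementally keeps the per-iteration cost proportional to two columns of A.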
Cites Work
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Title not available
- Optimal scaling of a gradient method for distributed resource allocation
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Parallel coordinate descent methods for big data optimization
- Efficiency of coordinate descent methods on huge-scale optimization problems
- On optimal probabilities in stochastic coordinate descent methods
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Title not available
- On Full Jacobian Decomposition of the Augmented Lagrangian Method for Separable Convex Programming
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- On the complexity analysis of randomized block-coordinate descent methods
- Convergence Rate Analysis for the Alternating Direction Method of Multipliers with a Substitution Procedure for Separable Convex Programming
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Regularization schemes for minimum error entropy principle
- Random block coordinate descent methods for linearly constrained optimization over networks
- Coordinate descent with arbitrary sampling I: algorithms and complexity
- Thresholded spectral algorithms for sparse approximations
Cited In (7)
- PhaseMax: Stable guarantees from noisy sub-Gaussian measurements
- Robust randomized optimization with k nearest neighbors
- Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
- Convergence of online pairwise regression learning with quadratic loss
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Accelerate stochastic subgradient method by leveraging local growth condition