Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent
DOI: 10.1137/16M1085905 · zbMath: 1353.65053 · OpenAlex: W2547193634 · MaRDI QID: Q2832112
Peter Richtárik, Olivier Fercoq
Publication date: 7 November 2016
Published in: SIAM Review
Full work available at URL: https://doi.org/10.1137/16m1085905
Keywords: complexity; linear programming; convex optimization; semidefinite programming; gradient methods; acceleration; parallel methods; big data; partial separability; proximal methods; randomized coordinate descent; randomized proximal gradient methods; separable overapproximation
MSC classification: Numerical mathematical programming methods (65K05); Semidefinite programming (90C22); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Linear programming (90C05); Parallel numerical computation (65Y05)
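For orientation, the keywords above refer to the randomized proximal coordinate descent template. The sketch below is a minimal illustrative instance of that template (the plain serial, non-accelerated variant, not the accelerated/parallel APPROX scheme analyzed in the reviewed paper), applied to the lasso problem; all function and variable names here are our own and assumed for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rcd_lasso(A, b, lam, n_iters=10000, seed=0):
    """Randomized proximal coordinate descent for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    using coordinate-wise Lipschitz constants L_i = ||A[:, i]||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                  # residual Ax - b, maintained incrementally
    L = (A ** 2).sum(axis=0)       # L_i = ||A[:, i]||^2 (assumed > 0)
    for _ in range(n_iters):
        i = rng.integers(n)        # uniform sampling over coordinates
        g = A[:, i] @ r            # i-th partial derivative of the smooth part
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += (x_new - x[i]) * A[:, i]  # cheap O(m) residual update
        x[i] = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = rcd_lasso(A, b, lam=0.1)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```

Each iteration touches only one column of A, which is what makes coordinate descent attractive for the high-dimensional problems the paper targets; the paper's contribution is combining this with acceleration, parallel block sampling, and proximal steps.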
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Parallel coordinate descent methods for big data optimization
- Inexact coordinate descent: complexity and preconditioning
- On optimal probabilities in stochastic coordinate descent methods
- On the complexity analysis of randomized block-coordinate descent methods
- Smooth minimization of nonsmooth functions with parallel coordinate descent methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Coordinate descent algorithms for lasso penalized regression
- Lectures on Modern Convex Optimization
- An Efficient Inexact ABCD Method for Least Squares Semidefinite Programming
- Distributed Coordinate Descent Method for Learning with Big Data
- Coordinate descent with arbitrary sampling I: algorithms and complexity
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Nearly-Linear Time Positive LP Solver with Faster Convergence Rate
- Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
- Semidefinite optimization
- Accelerated, Parallel, and Proximal Coordinate Descent
- An Accelerated Randomized Proximal Coordinate Gradient Method and Its Application to Regularized Empirical Risk Minimization
- Distributed Block Coordinate Descent for Minimizing Partially Separable Functions
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- Monotone Operators and the Proximal Point Algorithm
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Coordinate Descent Face-Off: Primal or Dual?
- Semidefinite Programming
- Safe Feature Elimination in Sparse Supervised Learning
- Learning with Submodular Functions: A Convex Optimization Perspective
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Understanding Machine Learning
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- Handbook of semidefinite programming. Theory, algorithms, and applications
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization