An accelerated communication-efficient primal-dual optimization framework for structured machine learning
DOI: 10.1080/10556788.2019.1650361 · zbMath: 1464.90059 · arXiv: 1711.05305 · OpenAlex: W2966396637 · Wikidata: Q127402834 · Scholia: Q127402834 · MaRDI QID: Q5859008
Martin Takáč, Chenxin Ma, Martin Jaggi, Nathan Srebro, Frank E. Curtis
Publication date: 15 April 2021
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1711.05305
Keywords: nonsmooth optimization, nonlinear optimization, machine learning, distributed optimization, accelerated methods
Related Items
- Communication-efficient distributed multi-task learning with matrix sparsity regularization
- Distributed Learning with Sparse Communications by Identification
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- Communication-efficient distributed optimization of self-concordant empirical loss
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
- Accelerated, Parallel, and Proximal Coordinate Descent
- Algorithm 778: L-BFGS-B
- Distributed optimization with arbitrary local solvers
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization