A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
From MaRDI portal
Publication: 4637039
zbMath: 1435.68290 · arXiv: 1604.03763 · MaRDI QID: Q4637039
No author found.
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1604.03763
computational complexity; acceleration; distributed optimization; stochastic dual coordinate ascent; regularized loss minimization
Related Items (4)
Communication-efficient distributed multi-task learning with matrix sparsity regularization ⋮ Inexact successive quadratic approximation for regularized optimization ⋮ Unnamed Item ⋮ Distributed block-diagonal approximation methods for regularized empirical risk minimization
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- Sparse inverse covariance estimation with the graphical lasso
- On the limited memory BFGS method for large scale optimization
- Introductory lectures on convex optimization. A basic course.
- Adaptive restart for accelerated gradient schemes
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization