A distributed block coordinate descent method for training l₁ regularized linear classifiers
From MaRDI portal
Publication:4637005
Recommendations
- An efficient distributed learning algorithm based on effective local functional approximations
- Block coordinate descent algorithms for large-scale sparse multiclass classification
- Distributed coordinate descent method for learning with big data
- Block splitting for distributed optimization
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
Cites work
- scientific article; zbMATH DE number 6253925 (no title available)
- scientific article; zbMATH DE number 3381785 (no title available)
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
- A coordinate gradient descent method for \(\ell_{1}\)-regularized convex minimization
- A coordinate gradient descent method for nonsmooth separable minimization
- A reliable effective terascale linear learning system
- An improved GLMNET for L1-regularized logistic regression
- Block splitting for distributed optimization
- Cost Approximation: A Unified Framework of Descent Algorithms for Nonlinear Programs
- Decomposition methods for differentiable optimization problems over Cartesian product sets
- Distributed coordinate descent method for learning with big data
- Distributed iterative thresholding for ℓ0/ℓ1-regularized linear inverse problems
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Fast alternating direction optimization methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
Cited in (12)
- More communication-efficient distributed sparse learning
- Communication-Efficient Distributed Linear Discriminant Analysis for Binary Classification
- Block coordinate descent algorithms for large-scale sparse multiclass classification
- A distributed training method for L1 regularized kernel machines based on filtering mechanism
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Distributed Newton Methods for Deep Neural Networks
- The distributed \({L_{1/2}}\) regularization
- scientific article; zbMATH DE number 6982986 (no title available)
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
- Block splitting for distributed optimization
- scientific article; zbMATH DE number 7409363 (no title available)
- An efficient distributed learning algorithm based on effective local functional approximations