A distributed block coordinate descent method for training l₁ regularized linear classifiers
Publication: 4637005
zbMATH Open: 1435.68274 · arXiv: 1405.4544 · MaRDI QID: Q4637005 · FDO: Q4637005
Dhruv Mahajan, [author name not available], S. S. Keerthi
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1405.4544
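For orientation, the sketch below illustrates the kind of coordinate update that \(l_1\)-regularized coordinate descent methods build on: each coordinate is set to its soft-thresholded exact minimizer. This is a minimal single-machine sketch using squared loss, not the authors' distributed algorithm (which partitions features into blocks across nodes and targets classification losses); the function and parameter names (`cd_l1_least_squares`, `soft_threshold`, `lam`, `n_iters`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_l1_least_squares(X, y, lam, n_iters=100):
    """Cyclic coordinate descent for
        min_w 0.5 * ||X w - y||^2 + lam * ||w||_1.
    Each coordinate is set to its exact soft-thresholded minimizer.
    A distributed block variant would update disjoint blocks of
    coordinates on separate nodes.
    """
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                    # residual, maintained incrementally
    col_sq = (X ** 2).sum(axis=0)    # per-coordinate curvature ||X_j||^2
    for _ in range(n_iters):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # inner product with the residual that excludes coordinate j
            rho = X[:, j] @ r + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (w[j] - w_new)   # keep residual in sync
            w[j] = w_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -3.0, 1.5]           # sparse ground truth
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    w_hat = cd_l1_least_squares(X, y, lam=5.0)
    print(np.round(w_hat, 2))               # most coordinates driven to zero
```

Maintaining the residual incrementally keeps each coordinate update at O(n) cost; distributed variants of this scheme exploit the same structure by letting each node own a block of coordinates and synchronizing only a residual-like quantity.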
Recommendations
- An efficient distributed learning algorithm based on effective local functional approximations
- Block coordinate descent algorithms for large-scale sparse multiclass classification
- Distributed coordinate descent method for learning with big data
- Block splitting for distributed optimization
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Distributed algorithms (68W15)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- On the global and linear convergence of the generalized alternating direction method of multipliers
- A coordinate gradient descent method for nonsmooth separable minimization
- [Title not available]
- [Title not available]
- Fast Alternating Direction Optimization Methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Distributed coordinate descent method for learning with big data
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- [Title not available]
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
- Cost Approximation: A Unified Framework of Descent Algorithms for Nonlinear Programs
- A coordinate gradient descent method for \(\ell_{1}\)-regularized convex minimization
- Block splitting for distributed optimization
- Decomposition methods for differentiable optimization problems over Cartesian product sets
- [Title not available]
- Distributed iterative thresholding for ℓ0/ℓ1-regularized linear inverse problems
Cited In (6)
- Communication-Efficient Distributed Linear Discriminant Analysis for Binary Classification
- A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
- [Title not available]
- Distributed Newton Methods for Deep Neural Networks
- [Title not available]
- More communication-efficient distributed sparse learning