More communication-efficient distributed sparse learning
From MaRDI portal
Publication:6546022
Cites work
- scientific article; zbMATH DE number 6982986
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 6438182
- A distributed block coordinate descent method for training \(l_1\) regularized linear classifiers
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Accelerated, parallel, and proximal coordinate descent
- Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data
- Classification of gene microarrays by penalized logistic regression
- Communication-efficient algorithms for decentralized and stochastic optimization
- Communication-efficient algorithms for statistical optimization
- Communication-efficient distributed statistical inference
- Communication-efficient sparse regression
- Communication-efficient surrogate quantile regression for non-randomly distributed system
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Distributed coordinate descent method for learning with big data
- Distributed testing and estimation under sparse high dimensional models
- First-Order Newton-Type Estimator for Distributed Estimation and Inference
- Sharp thresholds for high-dimensional and noisy sparsity recovery using \(\ell_1\)-constrained quadratic programming (Lasso)
- Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
This page was built for publication: More communication-efficient distributed sparse learning