Preconditioning meets biased compression for efficient distributed optimization
Publication: 6149587 (MaRDI item Q6149587)
Recommendations
- Stochastic gradient methods with preconditioned updates
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
- Stochastic distributed learning with gradient quantization and double-variance reduction
- Communication-efficient distributed optimization of self-concordant empirical loss
- On data preconditioning for regularized loss minimization
Cited in (3)