Federated Variance-Reduced Stochastic Gradient Descent With Robustness to Byzantine Attacks
Publication:5103010
Recommendations
- Byzantine-robust loopless stochastic variance-reduced gradient
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- A simplified convergence theory for Byzantine resilient stochastic gradient descent
- Federated learning for minimizing nonsmooth convex loss functions
- Robust federated learning under statistical heterogeneity via Hessian-weighted aggregation
- Stochastic distributed learning with gradient quantization and double-variance reduction
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
Cited in (5)
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Byzantine-robust loopless stochastic variance-reduced gradient
- Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data
- Federated learning for minimizing nonsmooth convex loss functions