Federated Variance-Reduced Stochastic Gradient Descent With Robustness to Byzantine Attacks
Publication: 5103010
DOI: 10.1109/TSP.2020.3012952
OpenAlex: W3046449784
MaRDI QID: Q5103010
FDO: Q5103010
Authors: Zhaoxian Wu, Qing Ling, Tianyi Chen, Georgios B. Giannakis
Publication date: 23 September 2022
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://arxiv.org/abs/1912.12716
Recommendations
- Byzantine-robust loopless stochastic variance-reduced gradient
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- A simplified convergence theory for Byzantine resilient stochastic gradient descent
- Federated learning for minimizing nonsmooth convex loss functions
- Robust federated learning under statistical heterogeneity via Hessian-weighted aggregation
- Stochastic distributed learning with gradient quantization and double-variance reduction
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
Cited In (5)
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Byzantine-robust loopless stochastic variance-reduced gradient
- Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data
- Federated learning for minimizing nonsmooth convex loss functions
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity