Byzantine-robust loopless stochastic variance-reduced gradient
From MaRDI portal
Publication:6134045
DOI: 10.1007/978-3-031-35305-5_3 · zbMATH Open: 1528.90189 · arXiv: 2303.04560 · OpenAlex: W4381956520 · MaRDI QID: Q6134045 · FDO: Q6134045
Authors: Nikita Fedin, Eduard Gorbunov
Publication date: 21 August 2023
Published in: Mathematical Optimization Theory and Operations Research
Abstract: Distributed optimization with open collaboration is a popular field, since it gives small groups, companies, universities, and individuals an opportunity to jointly solve huge-scale problems. However, standard optimization algorithms are fragile in such settings due to the possible presence of so-called Byzantine workers -- participants that can send (intentionally or not) incorrect information instead of that prescribed by the protocol (e.g., send the anti-gradient instead of stochastic gradients). Thus, the problem of designing distributed methods with provable robustness to Byzantine workers has received a lot of attention recently. In particular, several works consider a very promising way to achieve Byzantine tolerance by exploiting variance reduction and robust aggregation. Existing approaches use SAGA- and SARAH-type variance-reduced estimators, while another popular estimator -- SVRG -- has not been studied in the context of Byzantine robustness. In this work, we close this gap in the literature and propose a new method -- Byzantine-Robust Loopless Stochastic Variance Reduced Gradient (BR-LSVRG). We derive non-asymptotic convergence guarantees for the new method in the strongly convex case and compare its performance with existing approaches in numerical experiments.
Full work available at URL: https://arxiv.org/abs/2303.04560
Recommendations
- Genuinely distributed Byzantine machine learning
- Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data
- A simplified convergence theory for Byzantine resilient stochastic gradient descent
- Byzantine-robust distributed sparse learning for \(M\)-estimation
- Brief announcement: Byzantine-tolerant machine learning
Cites Work
- A Stochastic Approximation Method
- Deep learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- Understanding machine learning. From theory to algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Title not available
- The Byzantine Generals Problem
- Some methods of speeding up the convergence of iteration methods
- Title not available
- Lectures on convex optimization
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Brief announcement: Byzantine-tolerant machine learning
- Federated Variance-Reduced Stochastic Gradient Descent With Robustness to Byzantine Attacks
Cited In (3)