Fast Distributed Gradient Methods
Publication:2983161
Abstract: We study distributed optimization problems in which N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, with Lipschitz continuous gradients (with constant L) and bounded gradients. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications \(\mathcal{K}\) and the per-node gradient evaluations \(k\). Our first method, Distributed Nesterov Gradient (D-NG), achieves rates \(O(\log \mathcal{K}/\mathcal{K})\) and \(O(\log k/k)\). Our second method, Distributed Nesterov gradient with Consensus iterations (D-NC), assumes that all nodes know L and \(\mu(W)\) -- the second largest singular value of the doubly stochastic weight matrix W. It achieves rates \(O(1/\mathcal{K}^{2-\xi})\) and \(O(1/k^2)\) (\(\xi > 0\) arbitrarily small). Further, for both methods we give the explicit dependence of the convergence constants on N and W. Simulation examples illustrate our findings.
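To make the first method concrete: in a D-NG-style iteration, each node mixes its neighbors' iterates through the weight matrix W, takes a local gradient step with a diminishing step size, and applies a Nesterov-style momentum correction. The sketch below is an illustration consistent with the abstract, not the paper's reference implementation; in particular the step-size schedule c/(k+1) and the momentum weight k/(k+3) are assumed here.

```python
import numpy as np

def d_ng(grads, W, n_iters=2000, c=1.0, dim=1):
    """Sketch of a Distributed Nesterov Gradient (D-NG) style iteration.

    grads : list of per-node gradient functions grad_i(x), one per node
    W     : doubly stochastic N x N weight matrix (rows/columns sum to 1)
    Returns the N x dim array of per-node estimates after n_iters steps.
    """
    N = len(grads)
    x = np.zeros((N, dim))   # per-node estimates x_i(k)
    y = np.zeros((N, dim))   # auxiliary momentum variables y_i(k)
    for k in range(n_iters):
        alpha = c / (k + 1)                  # diminishing step size (assumed schedule)
        g = np.array([gi(y[i]) for i, gi in enumerate(grads)])
        x_new = W @ y - alpha * g            # consensus mixing + local gradient step
        beta = k / (k + 3)                   # Nesterov-style momentum weight (assumed)
        y = x_new + beta * (x_new - x)       # momentum extrapolation
        x = x_new
    return x

# Toy problem: node i holds f_i(x) = (x - a_i)^2 / 2, so the sum is
# minimized at the average of the a_i; all nodes should agree on it.
a = [1.0, 2.0, 6.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])           # doubly stochastic
estimates = d_ng(grads, W)
```

On this toy problem every node's estimate approaches the network-wide average of the a_i (here 3.0), illustrating both consensus and optimality of the combined iteration.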
Cited in (88):
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Asymptotic convergence of a distributed weighted least squares algorithm for networked systems with vector node variables
- Communication-efficient algorithms for decentralized and stochastic optimization
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Model aggregation for doubly divided data with large size and large dimension
- A distributed algorithm for efficiently solving linear equations and its applications (special issue JCW)
- Asynchronous algorithms for computing equilibrium prices in a capital asset pricing model
- Distributed minimal residual (DMR) method for acceleration of iterative algorithms
- GADMM: fast and communication efficient framework for distributed machine learning
- Tracking-ADMM for distributed constraint-coupled optimization
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Distributed and consensus optimization for non-smooth image reconstruction
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Distributed nonconvex constrained optimization over time-varying digraphs
- Fast Distributed Algorithms for Computing Separable Functions
- EFIX: exact fixed point methods for distributed optimization
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- A distributed algorithm for solving mixed equilibrium problems
- Improving the convergence of distributed gradient descent via inexact average consensus
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Harnessing Smoothness to Accelerate Distributed Optimization
- Composite optimization for the resource allocation problem
- On the linear convergence of two decentralized algorithms
- Optimal convergence rates for convex distributed optimization in networks
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Continuous distributed algorithms for solving linear equations in finite time
- Computation of exact gradients in distributed dynamic systems
- Distributed stochastic gradient tracking methods
- Decentralized consensus algorithm with delayed and stochastic gradients
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Exponential convergence of a distributed algorithm for solving linear algebraic equations
- Multi-cluster distributed optimization via random sleep strategy
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Decentralized gradient algorithm for solution of a linear equation
- Revisiting EXTRA for Smooth Distributed Optimization
- Reprint of ``A distributed algorithm for efficiently solving linear equations and its applications'' (Special issue JCW)
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- A multi-scale method for distributed convex optimization with constraints
- Distributed approximate Newton algorithms and weight design for constrained optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- Primal-dual algorithm for distributed constrained optimization
- A distributed conjugate gradient online learning method over networks
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Exact spectral-like gradient method for distributed optimization
- Distributed gradient tracking methods with finite data rates
- An accelerated distributed gradient method with local memory
- Surplus-based accelerated algorithms for distributed optimization over directed networks
- Surrogate-based distributed optimisation for expensive black-box functions
- An Arrow-Hurwicz-Uzawa type flow as least squares solver for network linear equations
- Distributed learning for random vector functional-link networks
- Newton-like method with diagonal correction for distributed optimization
- On the convergence of decentralized gradient descent
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Decentralized algorithms for distributed integer programming problems with a coupling cardinality constraint
- Distributed adaptive dynamic programming for data-driven optimal control
- Linear convergence of distributed estimation with constraints and communication delays
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Distributed multi-agent optimisation via coordination with second-order nearest neighbours
- Towards accelerated rates for distributed optimization over time-varying networks
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Distributed optimization with inexact oracle
- Momentum-based distributed gradient tracking algorithms for distributed aggregative optimization over unbalanced directed graphs
- Distributed accelerated gradient methods with restart under quadratic growth condition
- Optimal gradient tracking for decentralized optimization
- DIMIX: Diminishing Mixing for Sloppy Agents
- Distributed adaptive online learning for convex optimization with weight decay
- Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to \(L_1\)-minimization
- Distributed Newton Methods for Deep Neural Networks
- Distributed constrained optimization for multi-agent networks with communication delays under time-varying topologies
- Recent theoretical advances in decentralized distributed convex optimization
- Improving the Transient Times for Distributed Stochastic Gradient Methods
- Distributed primal-dual optimisation method with uncoordinated time-varying step-sizes
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Distributed convex optimization based on ADMM and belief propagation methods
- A generalized alternating direction implicit method for consensus optimization: application to distributed sparse logistic regression
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
- Decentralized online strongly convex optimization with general compressors and random disturbances
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Linear convergence rate analysis of a class of exact first-order distributed methods for weight-balanced time-varying networks and uncoordinated step sizes
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- Golden ratio proximal gradient ADMM for distributed composite convex optimization