Fast Distributed Gradient Methods
From MaRDI portal
Publication:2983161
Abstract: We study distributed optimization problems in which \(N\) nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant \(L\)), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications \(\mathcal{K}\) and the per-node gradient evaluations \(k\). Our first method, Distributed Nesterov Gradient (D-NG), achieves rates \(O(\log\mathcal{K}/\mathcal{K})\) and \(O(\log k/k)\). Our second method, Distributed Nesterov gradient with Consensus iterations (D-NC), assumes that all nodes know \(L\) and \(\mu\) -- the second largest singular value of the \(N \times N\) doubly stochastic weight matrix \(W\). It achieves rates \(O(1/\mathcal{K}^{2-\xi})\) and \(O(1/k^2)\) (\(\xi > 0\) arbitrarily small). Further, with both methods we give the explicit dependence of the convergence constants on \(N\) and \(W\). Simulation examples illustrate our findings.
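The D-NG iteration described above can be sketched as follows: each node mixes its neighbors' iterates through a doubly stochastic weight matrix, takes a local gradient step with a diminishing step size \(\alpha_k = c/(k+1)\), and applies a Nesterov-style extrapolation. The toy problem (local quadratic costs \(f_i(x) = \tfrac{1}{2}\|x - c_i\|^2\), so \(L = 1\)), the ring network, and the specific weights are illustrative assumptions for this demo, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the D-NG idea on a toy problem (assumptions, not the
# paper's setup): N nodes with local costs f_i(x) = 0.5*||x - c_i||^2,
# whose sum is minimized at the mean of the targets c_i.
N, d = 4, 2
rng = np.random.default_rng(0)
c = rng.standard_normal((N, d))          # local targets (illustrative)
x_star = c.mean(axis=0)                  # minimizer of the sum of costs

# Doubly stochastic weights on a ring network (illustrative choice)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros((N, d))
y = x.copy()
for k in range(2000):
    alpha = 0.5 / (k + 1)                    # diminishing step c/(k+1), c = 1/(2L)
    x_new = W @ y - alpha * (y - c)          # consensus mixing + local gradient step
    y = x_new + (k / (k + 3)) * (x_new - x)  # Nesterov extrapolation
    x = x_new

print(np.linalg.norm(x.mean(axis=0) - x_star))  # distance of network average to optimum
```

One communication round per gradient evaluation, as in D-NG; the D-NC variant would instead run multiple consensus iterations (tuned via \(L\) and \(\mu\)) between gradient steps.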
Cited in (88):
- Towards accelerated rates for distributed optimization over time-varying networks
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- An accelerated distributed gradient method with local memory
- Surplus-based accelerated algorithms for distributed optimization over directed networks
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Decentralized algorithms for distributed integer programming problems with a coupling cardinality constraint
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Asymptotic convergence of a distributed weighted least squares algorithm for networked systems with vector node variables
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed convex optimization based on ADMM and belief propagation methods
- A distributed algorithm for efficiently solving linear equations and its applications (special issue JCW)
- A distributed conjugate gradient online learning method over networks
- Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
- Decentralized gradient algorithm for solution of a linear equation
- Distributed nonconvex constrained optimization over time-varying digraphs
- An Arrow-Hurwicz-Uzawa type flow as least squares solver for network linear equations
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- Distributed adaptive online learning for convex optimization with weight decay
- Linear convergence of distributed estimation with constraints and communication delays
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Newton-like method with diagonal correction for distributed optimization
- GADMM: fast and communication efficient framework for distributed machine learning
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Model aggregation for doubly divided data with large size and large dimension
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- DIMIX: Diminishing Mixing for Sloppy Agents
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Decentralized consensus algorithm with delayed and stochastic gradients
- Distributed optimization with inexact oracle
- Distributed stochastic gradient tracking methods
- Asynchronous algorithms for computing equilibrium prices in a capital asset pricing model
- Computation of exact gradients in distributed dynamic systems
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Reprint of ``A distributed algorithm for efficiently solving linear equations and its applications'' (Special issue JCW)
- Distributed accelerated gradient methods with restart under quadratic growth condition
- Optimal gradient tracking for decentralized optimization
- On the linear convergence of two decentralized algorithms
- Distributed approximate Newton algorithms and weight design for constrained optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- A distributed algorithm for solving mixed equilibrium problems
- Distributed multi-agent optimisation via coordination with second-order nearest neighbours
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Multi-cluster distributed optimization via random sleep strategy
- Primal-dual algorithm for distributed constrained optimization
- Golden ratio proximal gradient ADMM for distributed composite convex optimization
- Distributed primal-dual optimisation method with uncoordinated time-varying step-sizes
- Distributed Newton Methods for Deep Neural Networks
- Recent theoretical advances in decentralized distributed convex optimization
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- Improving the Transient Times for Distributed Stochastic Gradient Methods
- A generalized alternating direction implicit method for consensus optimization: application to distributed sparse logistic regression
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- On the convergence of decentralized gradient descent
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Exponential convergence of a distributed algorithm for solving linear algebraic equations
- Tracking-ADMM for distributed constraint-coupled optimization
- Improving the convergence of distributed gradient descent via inexact average consensus
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Distributed gradient tracking methods with finite data rates
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed adaptive dynamic programming for data-driven optimal control
- Distributed and consensus optimization for non-smooth image reconstruction
- Exact spectral-like gradient method for distributed optimization
- Composite optimization for the resource allocation problem
- Optimal convergence rates for convex distributed optimization in networks
- EFIX: exact fixed point methods for distributed optimization
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- A multi-scale method for distributed convex optimization with constraints
- Revisiting EXTRA for Smooth Distributed Optimization
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- Continuous distributed algorithms for solving linear equations in finite time
- Decentralized online strongly convex optimization with general compressors and random disturbances
- Distributed minimal residual (DMR) method for acceleration of iterative algorithms
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Surrogate-based distributed optimisation for expensive black-box functions
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to \(L_1\)-minimization
- Distributed constrained optimization for multi-agent networks with communication delays under time-varying topologies
- Linear convergence rate analysis of a class of exact first-order distributed methods for weight-balanced time-varying networks and uncoordinated step sizes
- Momentum-based distributed gradient tracking algorithms for distributed aggregative optimization over unbalanced directed graphs
- Distributed learning for random vector functional-link networks
- Fast Distributed Algorithms for Computing Separable Functions
This page was built for publication: Fast Distributed Gradient Methods