Fast Distributed Gradient Methods
From MaRDI portal
Publication:2983161
DOI: 10.1109/TAC.2014.2298712
zbMATH Open: 1360.90292
arXiv: 1112.2972
MaRDI QID: Q2983161 (FDO: Q2983161)
Authors: Dušan Jakovetić, João Xavier, José M. F. Moura
Publication date: 16 May 2017
Published in: IEEE Transactions on Automatic Control
Abstract: We study distributed optimization problems where nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant \(L\)), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications \(K\) and the per-node gradient evaluations \(k\). Our first method, Distributed Nesterov Gradient (D-NG), achieves rates \(O(\log K/K)\) and \(O(\log k/k)\). Our second method, Distributed Nesterov gradient with Consensus iterations (D-NC), assumes at all nodes knowledge of \(L\) and \(\mu(W)\) -- the second largest singular value of the doubly stochastic weight matrix \(W\). It achieves rates \(O(1/K^{2-\xi})\) and \(O(1/k^2)\) (\(\xi > 0\) arbitrarily small). Further, for both methods we give the explicit dependence of the convergence constants on \(L\) and \(\mu(W)\). Simulation examples illustrate our findings.
Full work available at URL: https://arxiv.org/abs/1112.2972
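The abstract states the D-NG rates but not the iteration itself. As a concrete illustration, here is a minimal NumPy sketch of a D-NG-style iteration under stated assumptions: each node mixes its neighbors' iterates through the doubly stochastic weight matrix \(W\), takes a gradient step with diminishing step-size \(\alpha_k = c/(k+1)\), and applies a Nesterov momentum weight \(k/(k+3)\). The function name `d_ng`, the step-size constant `c`, and the quadratic toy costs below are illustrative choices, not taken from the paper's text here.

```python
import numpy as np

def d_ng(grads, W, x0, c=1.0, iters=200):
    """Sketch of a D-NG-style distributed Nesterov iteration.

    grads: list of per-node gradient functions, one per node
    W:     (N, N) doubly stochastic weight matrix
    x0:    (N, d) initial iterates, one row per node
    """
    N, _ = x0.shape
    x, y = x0.copy(), x0.copy()
    for k in range(iters):
        alpha = c / (k + 1)                      # diminishing step-size
        g = np.stack([grads[i](y[i]) for i in range(N)])
        x_new = W @ y - alpha * g                # consensus mixing + gradient step
        y = x_new + (k / (k + 3)) * (x_new - x)  # Nesterov momentum
        x = x_new
    return x

# Toy usage: f_i(x) = 0.5 * (x - b_i)^2 with b = (1, -1); the sum is
# minimized at x = 0, which every node's row should approach.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
grads = [lambda x: x - 1.0, lambda x: x + 1.0]
print(d_ng(grads, W, np.zeros((2, 1))))
```

The second method, D-NC, performs multiple consensus rounds per gradient evaluation with a constant step-size, which is consistent with its faster \(O(1/k^2)\) rate in gradient evaluations coming at the cost of extra communications (the \(2-\xi\) exponent in \(K\)).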
Mathematics Subject Classification:
- Deterministic network models in operations research (90B10)
- Distributed algorithms (68W15)
- Methods of reduced gradient type (90C52)
Cited In (88)
- An accelerated distributed gradient method with local memory
- Surplus-based accelerated algorithms for distributed optimization over directed networks
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Decentralized algorithms for distributed integer programming problems with a coupling cardinality constraint
- Asymptotic convergence of a distributed weighted least squares algorithm for networked systems with vector node variables
- Communication-efficient algorithms for decentralized and stochastic optimization
- Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
- A distributed conjugate gradient online learning method over networks
- A distributed algorithm for efficiently solving linear equations and its applications (special issue JCW)
- Distributed nonconvex constrained optimization over time-varying digraphs
- Decentralized gradient algorithm for solution of a linear equation
- An Arrow-Hurwicz-Uzawa type flow as least squares solver for network linear equations
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Newton-like method with diagonal correction for distributed optimization
- Model aggregation for doubly divided data with large size and large dimension
- GADMM: fast and communication efficient framework for distributed machine learning
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Decentralized consensus algorithm with delayed and stochastic gradients
- Asynchronous algorithms for computing equilibrium prices in a capital asset pricing model
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Computation of exact gradients in distributed dynamic systems
- Distributed stochastic gradient tracking methods
- Reprint of "A distributed algorithm for efficiently solving linear equations and its applications" (special issue JCW)
- On the linear convergence of two decentralized algorithms
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Distributed approximate Newton algorithms and weight design for constrained optimization
- A distributed algorithm for solving mixed equilibrium problems
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Multi-cluster distributed optimization via random sleep strategy
- Primal-dual algorithm for distributed constrained optimization
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- On the convergence of decentralized gradient descent
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Exponential convergence of a distributed algorithm for solving linear algebraic equations
- Tracking-ADMM for distributed constraint-coupled optimization
- Improving the convergence of distributed gradient descent via inexact average consensus
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed gradient tracking methods with finite data rates
- Distributed adaptive dynamic programming for data-driven optimal control
- Distributed and consensus optimization for non-smooth image reconstruction
- Composite optimization for the resource allocation problem
- Optimal convergence rates for convex distributed optimization in networks
- Exact spectral-like gradient method for distributed optimization
- EFIX: exact fixed point methods for distributed optimization
- Revisiting EXTRA for Smooth Distributed Optimization
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- A multi-scale method for distributed convex optimization with constraints
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Continuous distributed algorithms for solving linear equations in finite time
- Surrogate-based distributed optimisation for expensive black-box functions
- Distributed minimal residual (DMR) method for acceleration of iterative algorithms
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Fast Distributed Algorithms for Computing Separable Functions
- Distributed learning for random vector functional-link networks
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Distributed convex optimization based on ADMM and belief propagation methods
- Distributed adaptive online learning for convex optimization with weight decay
- Linear convergence of distributed estimation with constraints and communication delays
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- DIMIX: Diminishing Mixing for Sloppy Agents
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- Distributed optimization with inexact oracle
- Distributed accelerated gradient methods with restart under quadratic growth condition
- Optimal gradient tracking for decentralized optimization
- Distributed multi-agent optimisation via coordination with second-order nearest neighbours
- Golden ratio proximal gradient ADMM for distributed composite convex optimization
- Distributed primal-dual optimisation method with uncoordinated time-varying step-sizes
- Distributed Newton Methods for Deep Neural Networks
- Recent theoretical advances in decentralized distributed convex optimization
- Improving the Transient Times for Distributed Stochastic Gradient Methods
- A generalized alternating direction implicit method for consensus optimization: application to distributed sparse logistic regression
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Decentralized online strongly convex optimization with general compressors and random disturbances
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to \(L_1\)-minimization
- Distributed constrained optimization for multi-agent networks with communication delays under time-varying topologies
- Linear convergence rate analysis of a class of exact first-order distributed methods for weight-balanced time-varying networks and uncoordinated step sizes
- Momentum-based distributed gradient tracking algorithms for distributed aggregative optimization over unbalanced directed graphs
- Towards accelerated rates for distributed optimization over time-varying networks
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
This page was built for publication: Fast Distributed Gradient Methods