On the Convergence of Decentralized Gradient Descent

Publication: 2821798


DOI: 10.1137/130943170
zbMath: 1345.90068
arXiv: 1310.7063
MaRDI QID: Q2821798

Kun Yuan, Wotao Yin, Qing Ling

Publication date: 23 September 2016

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1310.7063


65K05: Numerical mathematical programming methods

90C25: Convex programming

90C30: Nonlinear programming


Related Items

Decentralized Consensus Algorithm with Delayed and Stochastic Gradients
A Kaczmarz Algorithm for Solving Tree Based Distributed Systems of Equations
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
On the Divergence of Decentralized Nonconvex Optimization
Distributed smooth optimisation with event-triggered proportional-integral algorithms
Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
Efficient and Reliable Overlay Networks for Decentralized Federated Learning
Distributed Algorithms with Finite Data Rates that Solve Linear Equations
Second-Order Guarantees of Distributed Gradient Algorithms
EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
Newton-like Method with Diagonal Correction for Distributed Optimization
Adaptive online distributed optimization in dynamic environments
Discussion of the paper ‘A review of distributed statistical inference’
An accelerated exact distributed first-order algorithm for optimization over directed networks
High-dimensional \(M\)-estimation for Byzantine-robust decentralized learning
A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
A divide-and-conquer algorithm for distributed optimization on networks
A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
Neurodynamic approaches for multi-agent distributed optimization
Learning Coefficient Heterogeneity over Networks: A Distributed Spanning-Tree-Based Fused-Lasso Regression
Understanding a Class of Decentralized and Federated Optimization Algorithms: A Multirate Feedback Control Perspective
DIMIX: Diminishing Mixing for Sloppy Agents
Dynamics based privacy preservation in decentralized optimization
A variance-reduced stochastic gradient tracking algorithm for decentralized optimization with orthogonality constraints
Distributed optimal frequency control under communication packet loss in multi-agent electric energy systems
Event-triggered primal-dual design with linear convergence for distributed nonstrongly convex optimization
Network Gradient Descent Algorithm for Decentralized Federated Learning
Golden ratio proximal gradient ADMM for distributed composite convex optimization
Using Witten Laplacians to Locate Index-1 Saddle Points
Recent theoretical advances in decentralized distributed convex optimization
Online learning over a decentralized network through ADMM
Distributed consensus-based multi-agent convex optimization via gradient tracking technique
On the linear convergence of two decentralized algorithms
Projected subgradient based distributed convex optimization with transmission noises
Convergence results of a nested decentralized gradient method for non-strongly convex problems
Blended dynamics approach to distributed optimization: sum convexity and convergence rate
A unitary distributed subgradient method for multi-agent optimization with different coupling sources
Distributed constrained optimization for multi-agent systems over a directed graph with piecewise stepsize
Correction-based diffusion LMS algorithms for distributed estimation
Distributed algorithms for computing a fixed point of multi-agent nonexpansive operators
Primal-dual stochastic distributed algorithm for constrained convex optimization
Subgradient averaging for multi-agent optimisation with different constraint sets
A distributed methodology for approximate uniform global minimum sharing
Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
EFIX: exact fixed point methods for distributed optimization
Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
ARock: An Algorithmic Framework for Asynchronous Parallel Coordinate Updates
Revisiting EXTRA for Smooth Distributed Optimization



Cites Work