Revisiting EXTRA for Smooth Distributed Optimization
From MaRDI portal
DOI: 10.1137/18M122902X · zbMATH Open: 1447.90030 · arXiv: 2002.10110 · OpenAlex: W3038991971 · MaRDI QID: Q3300767
Authors: Zhouchen Lin, Huan Li
Publication date: 30 July 2020
Published in: SIAM Journal on Optimization
Abstract: EXTRA is a popular method for decentralized distributed optimization and has broad applications. This paper revisits EXTRA. First, we give a sharp complexity analysis for EXTRA with the improved $O\big((\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)})\log\frac{1}{\epsilon(1-\sigma_2(W))}\big)$ communication and computation complexities for $\mu$-strongly convex and $L$-smooth problems, where $\sigma_2(W)$ is the second largest singular value of the weight matrix $W$. When the strong convexity is absent, we prove the $O\big((\frac{L}{\epsilon}+\frac{1}{1-\sigma_2(W)})\log\frac{1}{1-\sigma_2(W)}\big)$ complexities. Then, we use the Catalyst framework to accelerate EXTRA and obtain the $O\big(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log\frac{L}{\mu(1-\sigma_2(W))}\log\frac{1}{\epsilon}\big)$ communication and computation complexities for strongly convex and smooth problems, and the $O\big(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon(1-\sigma_2(W))}\big)$ complexities for non-strongly convex ones. The communication complexities of our accelerated EXTRA are only worse by factors of $O\big(\log\frac{L}{\mu(1-\sigma_2(W))}\big)$ and $O\big(\log\frac{1}{\epsilon(1-\sigma_2(W))}\big)$ than the lower complexity bounds for strongly convex and non-strongly convex problems, respectively.
Full work available at URL: https://arxiv.org/abs/2002.10110
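As context for the abstract, the EXTRA iteration itself can be sketched in a few lines. Below is a minimal NumPy toy (not from the paper; the ring topology, Metropolis-style weights, agent data, and step size are all illustrative assumptions) showing the standard EXTRA update $x^{k+2} = (I+W)x^{k+1} - \tilde{W}x^k - \alpha(\nabla f(x^{k+1}) - \nabla f(x^k))$ with $\tilde{W} = (I+W)/2$, applied to scalar local quadratics:

```python
import numpy as np

# Toy problem (illustrative): 4 agents on a ring, each holding a scalar
# quadratic f_i(x) = 0.5 * (a_i * x - b_i)^2. The global minimizer of
# sum_i f_i is x* = sum(a_i * b_i) / sum(a_i^2).
rng = np.random.default_rng(0)
n = 4
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
x_star = (a * b).sum() / (a ** 2).sum()

def grad(x):
    # Stacked local gradients: grad f_i(x_i) = a_i * (a_i * x_i - b_i)
    return a * (a * x - b)

# Symmetric, doubly stochastic mixing matrix W for a 4-cycle
# (1/3 to each neighbor, remaining 1/3 on self); W_tilde = (I + W)/2.
W = np.array([[1/3, 1/3, 0.0, 1/3],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [1/3, 0.0, 1/3, 1/3]])
W_tilde = (np.eye(n) + W) / 2

alpha = 0.1                                  # step size (assumed stable)
x_prev = np.zeros(n)                         # x^0
x_curr = W @ x_prev - alpha * grad(x_prev)   # x^1: one mixed gradient step
for _ in range(2000):
    # EXTRA: x^{k+2} = (I+W) x^{k+1} - W_tilde x^k - alpha*(g^{k+1} - g^k)
    x_next = (np.eye(n) + W) @ x_curr - W_tilde @ x_prev \
             - alpha * (grad(x_curr) - grad(x_prev))
    x_prev, x_curr = x_curr, x_next

# Unlike plain decentralized gradient descent, EXTRA converges to the
# exact global minimizer with a constant step size.
print(np.max(np.abs(x_curr - x_star)))
```

The gradient-difference correction term is what removes the steady-state bias of decentralized gradient descent; the paper's analysis bounds how fast this iteration converges in terms of $L/\mu$ and $1-\sigma_2(W)$.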
Recommendations
- Harnessing Smoothness to Accelerate Distributed Optimization
- ExtraPush for convex smooth decentralized optimization over directed networks
- Distributed Smooth Convex Optimization With Coupled Constraints
- Exact spectral-like gradient method for distributed optimization
- Gradient-free distributed optimization with exact convergence
- Random gradient extrapolation for distributed and stochastic optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Approximations in Distributed Optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Exponentially Convergent Algorithm Design for Constrained Distributed Optimization via Nonsmooth Approach
Cites Work
- Introductory lectures on convex optimization. A basic course.
- On the convergence of decentralized gradient descent
- Distributed Subgradient Methods for Multi-Agent Optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Fast Distributed Gradient Methods
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed asynchronous computation of fixed points
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- DSA: decentralized double stochastic averaging gradient algorithm
- Consensus-based distributed support vector machines
- Fastest Mixing Markov Chain on a Graph
- Optimal distributed online prediction using mini-batches
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Communication-efficient algorithms for decentralized and stochastic optimization
- Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
- Convergence Rate of Distributed ADMM Over Networks
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
Cited In (16)
- Optimal gradient tracking for decentralized optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Random gradient extrapolation for distributed and stochastic optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Recent theoretical advances in decentralized distributed convex optimization
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- ADD-OPT: Accelerated Distributed Directed Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- EFIX: exact fixed point methods for distributed optimization
- Projected subgradient based distributed convex optimization with transmission noises
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- ExtraPush for convex smooth decentralized optimization over directed networks
- Towards accelerated rates for distributed optimization over time-varying networks
- An Optimal Algorithm for Decentralized Finite-Sum Optimization