Revisiting EXTRA for Smooth Distributed Optimization
Publication:3300767
Abstract: EXTRA is a popular method for decentralized distributed optimization and has broad applications. This paper revisits EXTRA. First, we give a sharp complexity analysis for EXTRA with the improved $O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ communication and computation complexities for $\mu$-strongly convex and $L$-smooth problems, where $\sigma_2(W)$ is the second largest singular value of the weight matrix $W$. When the strong convexity is absent, we prove the $O\left(\left(\frac{L}{\epsilon}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon}\right)$ complexities. Then, we use the Catalyst framework to accelerate EXTRA and obtain the $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log\frac{L}{\mu(1-\sigma_2(W))}\log\frac{1}{\epsilon}\right)$ communication and computation complexities for strongly convex and smooth problems and the $O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\right)$ complexities for non-strongly convex ones. The communication complexities of our accelerated EXTRA are only worse by factors of $\log\frac{L}{\mu(1-\sigma_2(W))}$ and $\log\frac{1}{\epsilon}$ than the lower complexity bounds for strongly convex and non-strongly convex problems, respectively.
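For context, the EXTRA iteration analyzed in the paper (introduced in the cited work "EXTRA: an exact first-order algorithm for decentralized consensus optimization" by Shi et al.) combines one round of mixing with the weight matrix $W$ and a corrected gradient step. The sketch below is a minimal NumPy illustration of that recursion, not code from the paper; the function name `extra`, the gradient oracle `grad`, and the step size `alpha` are illustrative placeholders.

```python
import numpy as np

def extra(grad, x0, W, alpha, iters):
    """Minimal sketch of the EXTRA iteration (Shi et al.); names are illustrative.

    grad(X) : returns an (n, d) array whose i-th row is agent i's local
              gradient evaluated at the i-th row of X
    x0      : (n, d) array of initial local iterates (one row per agent)
    W       : (n, n) symmetric doubly stochastic mixing matrix of the network
    alpha   : step size
    """
    n = W.shape[0]
    W_tilde = (np.eye(n) + W) / 2.0        # EXTRA's second mixing matrix
    x_prev = x0
    x_curr = W @ x0 - alpha * grad(x0)     # first step: one decentralized gradient step
    for _ in range(iters):
        # x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha * (grad(x^{k+1}) - grad(x^k))
        x_next = (np.eye(n) + W) @ x_curr - W_tilde @ x_prev \
                 - alpha * (grad(x_curr) - grad(x_prev))
        x_prev, x_curr = x_curr, x_next
    return x_curr
```

Each iteration costs one round of communication (multiplication by $W$) and one local gradient evaluation per agent, which is why the complexities above depend on the spectral gap $1-\sigma_2(W)$ of the mixing matrix.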
Recommendations
- Harnessing Smoothness to Accelerate Distributed Optimization
- ExtraPush for convex smooth decentralized optimization over directed networks
- Distributed Smooth Convex Optimization With Coupled Constraints
- Exact spectral-like gradient method for distributed optimization
- Gradient-free distributed optimization with exact convergence
- Random gradient extrapolation for distributed and stochastic optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Approximations in Distributed Optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Exponentially Convergent Algorithm Design for Constrained Distributed Optimization via Nonsmooth Approach
Cites work
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Communication-efficient algorithms for decentralized and stochastic optimization
- Consensus-based distributed support vector machines
- Convergence Rate of Distributed ADMM Over Networks
- DSA: decentralized double stochastic averaging gradient algorithm
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed asynchronous computation of fixed points
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
- Fast Distributed Gradient Methods
- Fastest Mixing Markov Chain on a Graph
- Harnessing Smoothness to Accelerate Distributed Optimization
- Introductory lectures on convex optimization. A basic course.
- On the convergence of decentralized gradient descent
- Optimal distributed online prediction using mini-batches
Cited in (16)
- Towards accelerated rates for distributed optimization over time-varying networks
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Optimal gradient tracking for decentralized optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Random gradient extrapolation for distributed and stochastic optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Recent theoretical advances in decentralized distributed convex optimization
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- ADD-OPT: Accelerated Distributed Directed Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- EFIX: exact fixed point methods for distributed optimization
- Projected subgradient based distributed convex optimization with transmission noises
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- ExtraPush for convex smooth decentralized optimization over directed networks