A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
From MaRDI portal
Publication:2980337
Abstract: Based on the idea of randomized coordinate descent of \(\alpha\)-averaged operators, a randomized primal-dual optimization algorithm is introduced, in which a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. A network of agents, each having a separate cost function containing a differentiable term, seeks a consensus on the minimum of the aggregate objective. The method yields an algorithm in which, at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
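The asynchronous scheme sketched in the abstract can be illustrated on a toy consensus problem. The code below is a minimal sketch, not the paper's exact primal-dual iteration: each agent \(i\) holds the quadratic cost \(f_i(x) = \tfrac{1}{2}(x - a_i)^2\), so the aggregate minimum is the mean of the \(a_i\); the ring topology, wake-up probability, and step-size rule are illustrative assumptions.

```python
# Toy illustration of the asynchronous pattern described in the abstract:
# at each iteration a random subset of agents wakes up, each awake agent
# averages with its neighbors' last exchanged estimates and takes a local
# gradient step, and the others stay idle. Problem data, topology, and
# step sizes are illustrative assumptions, not the paper's algorithm.
import random

random.seed(0)

a = [0.0, 1.0, 2.0, 3.0, 4.0]          # local data; consensus optimum = mean(a) = 2
n = len(a)
x = [0.0] * n                           # local estimates, one per agent
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph

for t in range(5000):
    awake = [i for i in range(n) if random.random() < 0.5]  # random subset of agents
    step = 1.0 / (t + 2)                # diminishing step size (assumed schedule)
    new_x = list(x)
    for i in awake:
        # average own estimate with neighbors', then take a local gradient step
        local_avg = (x[i] + sum(x[j] for j in neighbors[i])) / (len(neighbors[i]) + 1)
        new_x[i] = local_avg - step * (x[i] - a[i])   # grad f_i(x) = x - a_i
    x = new_x                            # idle agents keep their previous estimates

print(x)  # estimates should cluster near the consensus optimum 2.0
```

This is a gossip-style decentralized gradient sketch rather than the primal-dual operator-splitting method of the paper; it only demonstrates the wake-up/update/exchange/idle pattern that the abstract describes.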
Recommendations
- A class of randomized primal-dual algorithms for distributed optimization
- DSCOVR: randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
- Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- Asynchronous Multiagent Primal-Dual Optimization
- Primal-dual algorithm for distributed constrained optimization
- Fast Distributed Algorithms Via Primal-Dual (Extended Abstract)
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- Primal-dual stochastic distributed algorithm for constrained convex optimization
Cited in (21)
- Distributed composite optimization for multi-agent systems with asynchrony
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs
- Coordinate Descent Face-Off: Primal or Dual?
- Randomized progressive hedging methods for multi-stage stochastic programming
- A generic online acceleration scheme for optimization algorithms via relaxation and inertia
- Non-parallel bounded support matrix machine and its application in roller bearing fault diagnosis
- A fully stochastic primal-dual algorithm
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization
- A coordinate-descent primal-dual algorithm with large step size and possibly nonseparable functions
- Proximal gradient methods with adaptive subspace sampling
- Primal-dual algorithms for multi-agent structured optimization over message-passing architectures with bounded communication delays
- Decentralized proximal splitting algorithms for composite constrained convex optimization
- A randomized incremental primal-dual method for decentralized consensus optimization
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- A distributed conjugate gradient online learning method over networks
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- A class of randomized primal-dual algorithms for distributed optimization
- Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- Distributed deterministic asynchronous algorithms in time-varying graphs through Dykstra splitting