Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
Publication:5280397
Abstract: In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by choosing suitable primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated with the computing nodes. We first show that a weighted proximal gradient on the dual function leads to a synchronous distributed algorithm with local dual proximal gradient updates at each node. Then, as the paper's main contribution, we develop asynchronous versions of the algorithm in which the node updates are triggered by local timers without any global iteration counter. The algorithms are shown to be proper randomized block-coordinate proximal gradient updates on the dual function.
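To make the update described in the abstract concrete, here is a minimal Python sketch (not the paper's algorithm verbatim) of a randomized block-coordinate proximal gradient iteration on a dual function. The toy instance, the graph, the names a, b, lam, mu, and the step size are all illustrative assumptions: each node i has the strongly convex cost f_i(x) = 0.5*(x - a_i)^2, the convex term g_i encodes the constraint x <= b, and consensus is enforced through per-neighbor copy constraints whose multipliers form node i's dual block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (illustrative assumption, not the paper's example):
# n agents on a cycle graph, local costs f_i(x) = 0.5*(x - a_i)^2
# (strongly convex) plus a convex term g_i = indicator of {x <= b}.
n = 5
a = rng.normal(size=n)
b = 0.1
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # cycle graph

# Dual variables stacked in non-overlapping per-node blocks: node i owns
# lam[i, j] for each neighbor j (multiplier of the copy constraint
# x_i = x_j) and mu[i] >= 0 (multiplier of the inequality x_i <= b).
lam = np.zeros((n, n))
mu = np.zeros(n)

def primal(lam, mu):
    # Lagrangian minimizer, in closed form because each f_i is quadratic:
    # x_i = a_i - mu_i - sum_j (lam[i, j] - lam[j, i]).
    return np.array([a[i] - mu[i]
                     - sum(lam[i, j] - lam[j, i] for j in nbrs[i])
                     for i in range(n)])

step = 0.15  # conservative fixed step size for this toy instance

for _ in range(30000):
    i = rng.integers(n)   # the node whose local timer fires
    x = primal(lam, mu)   # node i only uses x_i and its neighbors' x_j
    for j in nbrs[i]:     # gradient ascent on the smooth dual part
        lam[i, j] += step * (x[i] - x[j])
    # Proximal step: the prox of the indicator of {mu_i >= 0} is the
    # projection onto the nonnegative half-line.
    mu[i] = max(0.0, mu[i] + step * (x[i] - b))

print(primal(lam, mu))   # entries nearly equal (consensus) ...
print(min(a.mean(), b))  # ... and close to the analytic optimum
```

Because each f_i is strongly convex, the dual gradient is Lipschitz continuous and a fixed step size suffices; each randomly awakened node touches only its own non-overlapping dual block, mirroring the asynchronous, timer-driven updates described in the abstract.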
Cited in (17):
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Scientific article (no title recorded; zbMATH DE number 7307474)
- A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
- Asynchronous Distributed Optimization Over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence
- Decentralized proximal splitting algorithms for composite constrained convex optimization
- Asynchronous Distributed ADMM for Large-Scale Optimization—Part II: Linear Convergence Analysis and Numerical Performance
- A Distributed, Asynchronous, and Incremental Algorithm for Nonconvex Optimization: An ADMM Approach
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
- Distributed asynchronous incremental subgradient methods
- Distributed deterministic asynchronous algorithms in time-varying graphs through Dykstra splitting
- Distributed composite optimization for multi-agent systems with asynchrony