A dual approach for optimal algorithms in distributed optimization over networks
From MaRDI portal
Publication:5859014
Abstract: We study dual-based algorithms for distributed convex optimization problems over networks, where the objective is to minimize a sum of functions distributed over the nodes of a network. We provide complexity bounds for four different cases, namely: when each function is both strongly convex and smooth, when each function is strongly convex but not smooth, when each function is smooth but not strongly convex, and when each function is convex but neither strongly convex nor smooth. Our approach is based on the dual of an appropriately formulated primal problem, which includes a graph that models the communication restrictions. We propose distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network. Initially, we focus on functions for which the minimization defining the Legendre-Fenchel conjugate can be carried out explicitly, i.e., admissible or dual-friendly functions. Then, we study distributed optimization algorithms for non-dual-friendly functions, as well as a method to improve the dependency on the parameters of the functions involved. A numerical analysis of the proposed algorithms is also provided.
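The dual approach described in the abstract can be illustrated with a minimal sketch (not the paper's actual algorithm): each agent holds a quadratic local objective f_i(x) = ½(x − a_i)², which is "dual friendly" since the minimization defining its conjugate has a closed form, and consensus is encoded through the graph Laplacian constraint Wx = 0. Plain dual gradient ascent then requires only neighbor-to-neighbor communication per iteration. The function names, step size, and 3-node path graph below are illustrative assumptions.

```python
# Sketch of dual gradient ascent for distributed consensus optimization:
#   minimize sum_i f_i(x_i)  subject to  W x = 0,
# where W is the graph Laplacian (ker W = consensus subspace) and
# f_i(x) = 0.5 * (x - a_i)**2 is a hypothetical dual-friendly objective.

def laplacian_times(W, v):
    """One round of neighbor communication: multiply by the Laplacian W."""
    n = len(v)
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

def dual_gradient_consensus(a, W, step=0.05, iters=2000):
    n = len(a)
    y = [0.0] * n                    # dual variables, one per node
    x = list(a)
    for _ in range(iters):
        Wy = laplacian_times(W, y)
        # Primal minimizer of f_i(x_i) + (W y)_i * x_i:
        # closed form for quadratics, x_i = a_i - (W y)_i (dual friendliness).
        x = [a[i] - Wy[i] for i in range(n)]
        # Dual ascent step: the gradient of the dual function is W x.
        Wx = laplacian_times(W, x)
        y = [y[i] + step * Wx[i] for i in range(n)]
    return x

# Path graph on 3 nodes; the constrained minimizer is the average of a.
W = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
a = [1.0, 2.0, 6.0]
x = dual_gradient_consensus(a, W)
# Each x_i approaches the network average of a (here 3.0).
```

The step size must satisfy step < 2/λ_max(W)², which holds here since the path-graph Laplacian has λ_max = 3; the paper's contribution is precisely to accelerate such dual schemes to optimal rates that depend on the spectral gap of W.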
Recommendations
- Optimal convergence rates for convex distributed optimization in networks
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Inexact dual averaging method for distributed multi-agent optimization
- Primal-dual algorithm for distributed constrained optimization
Cites work
- scientific article; zbMATH DE number 3850830 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 729680 (no title available)
- scientific article; zbMATH DE number 2121575 (no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A fast dual proximal gradient algorithm for convex minimization and applications
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- A smoothed dual approach for variational Wasserstein problems
- Accelerated Distributed Nesterov Gradient Descent
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Adaptive restart for accelerated gradient schemes
- An $O(1/k)$ Gradient Method for Network Resource Allocation Problems
- Analysis of Max-Consensus Algorithms in Wireless Channels
- Analysis of accelerated gossip algorithms
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Asymptotic agreement in distributed estimation
- Block splitting for distributed optimization
- Communication-efficient algorithms for decentralized and stochastic optimization
- Convergence and asymptotic agreement in distributed decision problems
- Convex optimization: algorithms and complexity
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Distributed resource allocation on dynamic networks in quadratic time
- Distributed stochastic subgradient projection algorithms for convex optimization
- Double smoothing technique for large-scale linearly constrained convex optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Efficient numerical methods for entropy-linear programming problems
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Fast primal-dual gradient method for strongly convex minimization problems with linear constraints
- First-order methods of smooth convex optimization with inexact oracle
- Gradient methods for minimizing composite functions
- Gradient sliding for composite optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Large-scale machine learning with stochastic gradient descent
- On Distributed Averaging Algorithms and Quantization Effects
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Optimal convergence rates for convex distributed optimization in networks
- Optimal scaling of a gradient method for distributed resource allocation
- Optimization and Analysis of Distributed Averaging With Short Node Memory
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Primal-dual first-order methods with $\mathcal{O}(1/\varepsilon)$ iteration-complexity for cone programming
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Randomized smoothing for stochastic optimization
- Reaching a Consensus
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Smooth minimization of non-smooth functions
- Universal gradient methods for convex optimization problems
- Variational Analysis
Cited in (19 documents)
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- scientific article; zbMATH DE number 1054678 (no title available)
- Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters
- scientific article; zbMATH DE number 4081262 (no title available)
- A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- Optimal convergence rates for convex distributed optimization in networks
- scientific article; zbMATH DE number 4199994 (no title available)
- Communication-Efficient Distributed Eigenspace Estimation
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- A Distributed Newton Method for Network Utility Maximization–I: Algorithm
- Hybrid online learning control in networked multiagent systems: A survey
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
- Recent theoretical advances in decentralized distributed convex optimization
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Distributed Optimization in Networked Systems
- A divide-and-conquer algorithm for distributed optimization on networks