A dual approach for optimal algorithms in distributed optimization over networks
DOI: 10.1080/10556788.2020.1750013 · zbMATH Open: 1464.90062 · arXiv: 1809.00710 · OpenAlex: W3016897523 · MaRDI QID: Q5859014
Authors: César A. Uribe, Soomin Lee, Angelia Nedić, Alexander V. Gasnikov
Publication date: 15 April 2021
Published in: Optimization Methods & Software
Full work available at URL: https://arxiv.org/abs/1809.00710
Recommendations
- Optimal convergence rates for convex distributed optimization in networks
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Inexact dual averaging method for distributed multi-agent optimization
- Primal-dual algorithm for distributed constrained optimization
Keywords: convex optimization; distributed optimization; primal-dual algorithms; optimal rates; optimization over networks
MSC: Convex programming (90C25); Programming involving graphs or networks (90C35); Nonlinear programming (90C30); Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Title not available
- Variational Analysis
- Smooth minimization of non-smooth functions
- Adaptive restart for accelerated gradient schemes
- Title not available
- Gradient methods for minimizing composite functions
- Title not available
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- First-order methods of smooth convex optimization with inexact oracle
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- An \(O(1/k)\) Gradient Method for Network Resource Allocation Problems
- Optimal scaling of a gradient method for distributed resource allocation
- Reaching a Consensus
- Distributed Subgradient Methods for Multi-Agent Optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Title not available
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- On Distributed Averaging Algorithms and Quantization Effects
- Distributed stochastic subgradient projection algorithms for convex optimization
- Double smoothing technique for large-scale linearly constrained convex optimization
- Universal gradient methods for convex optimization problems
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Block splitting for distributed optimization
- Gradient sliding for composite optimization
- Convex optimization: algorithms and complexity
- Efficient numerical methods for entropy-linear programming problems
- Distributed Optimization Over Time-Varying Directed Graphs
- Asymptotic agreement in distributed estimation
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Large-scale machine learning with stochastic gradient descent
- Convergence and asymptotic agreement in distributed decision problems
- Randomized smoothing for stochastic optimization
- Optimization and Analysis of Distributed Averaging With Short Node Memory
- Distributed resource allocation on dynamic networks in quadratic time
- Analysis of accelerated gossip algorithms
- A fast dual proximal gradient algorithm for convex minimization and applications
- Fast primal-dual gradient method for strongly convex minimization problems with linear constraints
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Optimal convergence rates for convex distributed optimization in networks
- A smoothed dual approach for variational Wasserstein problems
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Analysis of Max-Consensus Algorithms in Wireless Channels
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Accelerated Distributed Nesterov Gradient Descent
Cited In (19)
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- A Distributed Newton Method for Network Utility Maximization–I: Algorithm
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters
- Distributed Optimization in Networked Systems
- Communication-Efficient Distributed Eigenspace Estimation
- Title not available
- Hybrid online learning control in networked multiagent systems: A survey
- Title not available
- Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
- Recent theoretical advances in decentralized distributed convex optimization
- Title not available
- Optimal convergence rates for convex distributed optimization in networks
- A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- A divide-and-conquer algorithm for distributed optimization on networks
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- An Optimal Algorithm for Decentralized Finite-Sum Optimization