On decentralized nonsmooth optimization
Publication:6134043
Abstract: In decentralized optimization, several nodes connected by a network collaboratively minimize a common objective function. For the minimization of Lipschitz-continuous functions, lower complexity bounds are known, along with algorithms that attain them. We study a specific structured class of problems: linear models with nonsmooth loss functions. Our algorithm combines regularization with a dual reformulation to obtain an effective optimization method whose complexity is better than the lower bounds for the general class.
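To make the setting concrete, the following is a minimal sketch of the classical decentralized subgradient method (gossip averaging plus local subgradient steps) on a toy problem of the kind the abstract describes: a linear model with a nonsmooth, Lipschitz loss. This illustrates the problem class only; it is not the paper's regularization/dual-reformulation algorithm, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_samples, dim = 4, 20, 3
x_true = rng.normal(size=dim)

# Each node i holds a private dataset (A_i, b_i); the global objective is
#   f(x) = sum_i (1/m) * ||A_i x - b_i||_1   (nonsmooth, Lipschitz).
A = [rng.normal(size=(n_samples, dim)) for _ in range(n_nodes)]
b = [Ai @ x_true for Ai in A]

def local_subgrad(Ai, bi, x):
    # A subgradient of (1/m) * ||A x - b||_1 at x.
    return Ai.T @ np.sign(Ai @ x - bi) / len(bi)

# Doubly stochastic mixing matrix for a ring network: each node averages
# with its two neighbours (rows and columns sum to 1).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i + 1) % n_nodes] = 0.25
    W[i, (i - 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))            # one local iterate per node
for t in range(1, 2001):
    G = np.stack([local_subgrad(A[i], b[i], X[i]) for i in range(n_nodes)])
    X = W @ X - (1.0 / np.sqrt(t)) * G  # gossip averaging + subgradient step

# All local iterates should be close to consensus near x_true.
print(np.max(np.abs(X - x_true)))
```

With a diminishing step size, every node's iterate drifts toward the common minimizer while the gossip step keeps the nodes close to consensus; methods of this generic kind are exactly what the known lower bounds for Lipschitz objectives apply to.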
Recommendations
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Optimal convergence rates for convex distributed optimization in networks
- Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches
Cites work
- scientific article; zbMATH DE number 2107836
- A stable alternative to Sinkhorn's algorithm for regularized optimal transport
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Estimating point-to-point and point-to-multipoint traffic matrices: an information-theoretic approach
Cited in (12)
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Decentralized Gradient Descent Maximization Method for Composite Nonconvex Strongly-Concave Minimax Problems
- On Nonconvex Decentralized Gradient Descent
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks
- An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization
- Non-Convex Distributed Optimization
- Recent theoretical advances in decentralized distributed convex optimization
- Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Decentralized Dynamic Optimization Through the Alternating Direction Method of Multipliers