Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
Publication:2288705
Abstract: We consider the problem of minimizing a sum of non-smooth convex functions over a non-Euclidean space, e.g., the probability simplex, using only local computation and communication over an undirected graph. We propose two algorithms motivated by mass-spring-damper dynamics. The first algorithm uses an explicit update that computes only subgradients and Bregman projections, and matches the convergence behavior of centralized mirror descent. The second algorithm uses an implicit update that solves a penalized subproblem at each iteration, and achieves an iteration complexity of \(\mathcal{O}(1/K)\). The results are also demonstrated via numerical examples.
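The explicit update described in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's algorithm: it assumes the negative-entropy mirror map on the probability simplex (so the Bregman-projected subgradient step takes the closed-form exponentiated-gradient shape), and it replaces the mass-spring-damper coupling with a simple doubly stochastic consensus averaging of neighbors' iterates; the function names and the mixing matrix `W` are assumptions for illustration.

```python
import numpy as np

def simplex_mirror_step(x, grad, step):
    """One mirror-descent step on the simplex with the negative-entropy
    mirror map: the Bregman projection has this closed multiplicative form."""
    y = x * np.exp(-step * grad)
    return y / y.sum()

def distributed_mirror_descent(subgradients, W, x0, steps):
    """Hypothetical sketch of a distributed explicit update.

    subgradients : list of callables, one local subgradient oracle per node
    W            : doubly stochastic mixing matrix of the communication graph
    x0           : common initial point on the probability simplex
    steps        : sequence of step sizes
    """
    n = len(subgradients)
    X = np.tile(x0, (n, 1))          # one row per node
    for step in steps:
        # Consensus: each node averages its neighbors' iterates.
        # A convex combination of simplex points stays on the simplex.
        X = W @ X
        # Local mirror-descent step at each node.
        X = np.vstack([
            simplex_mirror_step(X[i], subgradients[i](X[i]), step)
            for i in range(n)
        ])
    # Report the network average of the final iterates.
    return X.mean(axis=0)
```

As a usage example, three nodes on a path graph minimizing a sum of linear objectives \(f_i(x) = c_i^\top x\) over the simplex will concentrate mass on the coordinate minimizing \(\sum_i c_i\), mirroring the centralized behavior the abstract refers to.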
Recommendations
- Distributed nonconvex constrained optimization over time-varying digraphs
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Optimal convergence rates for convex distributed optimization in networks
- Distributed coordination for nonsmooth convex optimization via saddle-point dynamics
- A dual approach for optimal algorithms in distributed optimization over networks
Cites work
- scientific article; zbMATH DE number 51132
- scientific article; zbMATH DE number 3365044
- A Multi-Agent System With a Proportional-Integral Protocol for Distributed Constrained Optimization
- A variational perspective on accelerated methods in optimization
- An iterative row-action method for interval convex programming
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convex optimization: algorithms and complexity
- Decentralized observers with consensus filters for distributed discrete-time linear systems
- Distributed Continuous-Time Algorithm for Constrained Convex Optimizations via Nonsmooth Analysis Approach
- Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed average consensus with least-mean-square deviation
- Distributed constrained optimal consensus of multi-agent systems
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Graph theoretic methods in multiagent networks
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Passivity-Based Distributed Optimization With Communication Delays Using PI Consensus Algorithm
- Proximal minimization algorithm with \(D\)-functions
Cited in (2)