Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
From MaRDI portal
Publication:2288705
DOI: 10.1016/J.AUTOMATICA.2019.108703 · zbMATH Open: 1430.93091 · arXiv: 1808.01999 · OpenAlex: W2991627789 · Wikidata: Q126670366 (Scholia: Q126670366) · MaRDI QID: Q2288705 (FDO: Q2288705)
Authors: Yue Yu, Behçet Açıkmeşe, Mehran Mesbahi
Publication date: 20 January 2020
Published in: Automatica
Abstract: We consider the problem of minimizing the sum of non-smooth convex functions in non-Euclidean spaces, e.g., the probability simplex, via only local computation and communication on an undirected graph. We propose two algorithms motivated by mass-spring-damper dynamics. The first algorithm uses an explicit update that computes subgradients and Bregman projections only, and matches the convergence behavior of centralized mirror descent. The second algorithm uses an implicit update that solves a penalized subproblem at each iteration, and achieves an iteration complexity of \(\mathcal{O}(1/K)\). The results are also demonstrated via numerical examples.
Full work available at URL: https://arxiv.org/abs/1808.01999
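The explicit update described in the abstract (consensus plus subgradient steps followed by a Bregman projection) can be sketched in a few lines for the probability-simplex case with the entropic mirror map. This is an illustrative sketch, not the paper's exact algorithm: the function names, the constant step size, the complete-graph mixing weights, and the toy linear costs are all assumptions introduced here.

```python
# Hypothetical sketch of a distributed mirror-descent-style explicit update
# on the probability simplex (entropic mirror map). All names, the step
# size eta, and the toy costs below are illustrative assumptions.
import math

def softmax(z):
    """Numerically stable Bregman projection onto the simplex
    under the entropic mirror map."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distributed_mirror_descent(costs, W, eta=0.1, iters=500):
    """Each agent i minimizes the linear cost c_i . x over the simplex.
    Agents average their iterates in the dual (log) space using the
    mixing weights W, take a subgradient step, and project back."""
    n = len(costs)       # number of agents
    d = len(costs[0])    # simplex dimension
    x = [[1.0 / d] * d for _ in range(n)]   # start at the simplex center
    for _ in range(iters):
        logs = [[math.log(v) for v in xi] for xi in x]
        new_x = []
        for i in range(n):
            # consensus step in the dual space, then a subgradient step
            z = [sum(W[i][j] * logs[j][m] for j in range(n))
                 - eta * costs[i][m] for m in range(d)]
            new_x.append(softmax(z))  # Bregman projection to the simplex
        x = new_x
    return x

# Toy problem: 3 agents on a complete graph; the sum of the costs is
# minimized at the vertex e_1, so all agents should concentrate there.
costs = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]
W = [[1.0 / 3] * 3 for _ in range(3)]
x = distributed_mirror_descent(costs, W)
```

With the complete-graph weights, the log-space averaging drives the agents to consensus while the subgradient steps push the common iterate toward the vertex minimizing the summed cost; for the toy data above, every agent's iterate concentrates near e_1.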
Recommendations
- Distributed nonconvex constrained optimization over time-varying digraphs
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Optimal convergence rates for convex distributed optimization in networks
- Distributed coordination for nonsmooth convex optimization via saddle-point dynamics
- A dual approach for optimal algorithms in distributed optimization over networks
Cites Work
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- An iterative row-action method for interval convex programming
- Graph theoretic methods in multiagent networks
- Constrained Consensus and Optimization in Multi-Agent Networks
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed constrained optimal consensus of multi-agent systems
- Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs
- Proximal minimization algorithm with \(D\)-functions
- Distributed average consensus with least-mean-square deviation
- Convex optimization: algorithms and complexity
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Decentralized observers with consensus filters for distributed discrete-time linear systems
- Optimal distributed stochastic mirror descent for strongly convex optimization
- A variational perspective on accelerated methods in optimization
- A Multi-Agent System With a Proportional-Integral Protocol for Distributed Constrained Optimization
- Distributed Continuous-Time Algorithm for Constrained Convex Optimizations via Nonsmooth Analysis Approach
- Passivity-Based Distributed Optimization With Communication Delays Using PI Consensus Algorithm
Cited In (2)
Uses Software