Triggered gradient tracking for asynchronous distributed optimization
From MaRDI portal
Abstract: This paper proposes Asynchronous Triggered Gradient Tracking, a distributed algorithm for consensus optimization over networks with asynchronous communication. As a building block, we first devise the continuous-time counterpart of the recently proposed (discrete-time) distributed gradient tracking, called Continuous Gradient Tracking. Using a Lyapunov approach, we prove exponential stability of the equilibrium at which the agents' estimates are consensual on the optimal solution of the problem, for arbitrary initialization of the local estimates. We then propose two triggered versions of the algorithm. In the first, the agents continuously integrate their local dynamics and exchange their current local variables with neighbors according to a synchronous communication protocol. In Asynchronous Triggered Gradient Tracking, we propose a totally asynchronous scheme in which each agent sends its current local variables to neighbors based on a locally verifiable triggering condition. The triggering protocol preserves the linear convergence of the algorithm and excludes Zeno behavior. Using the stability analysis of Continuous Gradient Tracking as a preparatory result, we show that exponential stability of the equilibrium point holds for both triggered algorithms and for any initialization of the estimates. Finally, numerical simulations validate the effectiveness of the proposed methods and show improved performance in terms of inter-agent communication.
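The abstract's main ingredients — gradient tracking plus an event-triggered, locally verifiable broadcast rule — can be illustrated with a small discrete-time sketch. This is not the paper's exact scheme (which is continuous-time); the problem data, step size, and geometrically decaying trigger threshold below are all illustrative assumptions. Each agent rebroadcasts its variables only when they drift from the last broadcast value by more than the current threshold:

```python
import numpy as np

# Problem: minimize sum_i 0.5 * a_i * (x - b_i)^2 over a scalar x (4 agents).
a = np.array([1.0, 2.0, 3.0, 4.0])   # local curvatures (hypothetical data)
b = np.array([4.0, 3.0, 2.0, 1.0])   # local minimizers (hypothetical data)
x_star = (a * b).sum() / a.sum()     # closed-form global optimum

n = 4
# Doubly stochastic mixing matrix for a 4-agent ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: a * (x - b)         # stacked local gradients

alpha = 0.03                         # step size (assumed small enough)
eps0, rho = 0.1, 0.97                # decaying trigger threshold eps0 * rho**k

x = np.zeros(n)
y = grad(x)                          # tracker initialized at local gradients
x_hat, y_hat = x.copy(), y.copy()    # last values broadcast to neighbors
broadcasts = 0

for k in range(2500):
    g_old = grad(x)
    # Consensus step uses only broadcast (possibly stale) neighbor values;
    # the (W - I) form preserves the tracking invariant
    # sum_i y_i = sum_i grad_i exactly, regardless of staleness.
    x = x + (W @ x_hat - x_hat) - alpha * y
    y = y + (W @ y_hat - y_hat) + grad(x) - g_old
    # Locally verifiable trigger: each agent rebroadcasts only when its
    # deviation from the last broadcast value exceeds the threshold.
    thr = eps0 * rho ** k
    fire = np.maximum(np.abs(x - x_hat), np.abs(y - y_hat)) > thr
    x_hat[fire], y_hat[fire] = x[fire], y[fire]
    broadcasts += int(fire.sum())

print(abs(x - x_star).max(), broadcasts)
```

Because the threshold decays geometrically, the broadcast error vanishes and the estimates still converge to the optimum, while communication happens only at trigger times rather than at every step.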
Recommendations
- Event-triggered zero-gradient-sum distributed consensus optimization over directed networks
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Two-stage continuous-time triggered algorithms for constrained distributed optimization over directed graphs
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Distributed optimization over weight-balanced digraphs with event-triggered communication
Cites work
- scientific article; zbMATH DE number 5347321
- scientific article; zbMATH DE number 7370630
- scientific article; zbMATH DE number 6936839
- A Multi-Agent System With a Proportional-Integral Protocol for Distributed Constrained Optimization
- A Second-Order Multi-Agent Network for Bound-Constrained Distributed Optimization
- A distributed continuous-time modified Newton-Raphson algorithm
- A variational perspective on accelerated methods in optimization
- ADD-OPT: Accelerated Distributed Directed Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Discrete-time dynamic average consensus
- Distributed Continuous-Time Algorithm for Constrained Convex Optimizations via Nonsmooth Analysis Approach
- Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs
- Distributed Continuous-Time Optimization: Nonuniform Gradient Gains, Finite-Time Convergence, and Convex Constraint Set
- Distributed Subgradient Method With Edge-Based Event-Triggered Communication
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Distributed nonconvex constrained optimization over time-varying digraphs
- Event-Triggered Quantized Communication-Based Distributed Convex Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Input-Feedforward-Passivity-Based Distributed Optimization Over Jointly Connected Balanced Digraphs
- Newton-Raphson Consensus for Distributed Convex Optimization
- Nonlinear systems
- Passivity-Based Distributed Optimization With Communication Delays Using PI Consensus Algorithm
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- The approximate duality gap technique: a unified theory of first-order methods
- Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms
Cited in (3)
This page was built for publication: Triggered gradient tracking for asynchronous distributed optimization
MaRDI item: Q2103702