Triggered gradient tracking for asynchronous distributed optimization
Publication: Q2103702
DOI: 10.1016/j.automatica.2022.110726
zbMath: 1505.93231
arXiv: 2203.02210
OpenAlex: W4309287410
MaRDI QID: Q2103702
Authors: Ivano Notarnicola, Guido Carnevale, Lorenzo Marconi, Giuseppe Notarstefano
Publication date: 9 December 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2203.02210
MSC classifications:
- Existence theories for free problems in two or more independent variables (49J10)
- Exponential stability (93D23)
- Networked control (93B70)
- Consensus (93D50)
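For context on the method named in the title: the paper builds on the standard gradient-tracking scheme studied in several of the works cited below (e.g. "Harnessing Smoothness to Accelerate Distributed Optimization"). The following is a minimal illustrative sketch of that synchronous baseline, not of the paper's triggered asynchronous algorithm; the quadratic local costs, ring-graph mixing matrix, and step size are toy assumptions chosen for the example.

```python
import numpy as np

# Sketch of synchronous gradient tracking (the baseline the paper extends).
# Toy problem (assumption): minimize sum_i f_i(x), f_i(x) = 0.5*(x - b_i)^2,
# whose minimizer is the mean of the b_i.

n = 4                                    # number of agents
b = np.array([1.0, 2.0, 3.0, 4.0])       # local targets; optimum is mean(b) = 2.5
grad = lambda x, i: x - b[i]             # gradient of the local cost f_i

# Doubly stochastic mixing matrix for a 4-node ring graph (assumption).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.1                              # step size (small enough to converge here)
x = np.zeros(n)                          # local estimates x_i
y = np.array([grad(x[i], i) for i in range(n)])  # trackers, init y_i^0 = grad f_i(x_i^0)
g_old = y.copy()

for _ in range(200):
    x_new = W @ x - alpha * y            # consensus step plus tracked-gradient descent
    g_new = np.array([grad(x_new[i], i) for i in range(n)])
    y = W @ y + g_new - g_old            # dynamic average consensus on the gradients
    x, g_old = x_new, g_new

print(np.round(x, 3))                    # all agents agree on mean(b) = 2.5
```

The `y` update is a dynamic average consensus step, so each agent's tracker converges to the average of all local gradients; the triggered scheme of the paper replaces these periodic synchronous exchanges with locally triggered, asynchronous ones.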
Cites Work
- Discrete-time dynamic average consensus
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- A distributed continuous-time modified Newton-Raphson algorithm
- Distributed nonconvex constrained optimization over time-varying digraphs
- Newton-Raphson Consensus for Distributed Convex Optimization
- A Second-Order Multi-Agent Network for Bound-Constrained Distributed Optimization
- Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs
- Event-Triggered Quantized Communication-Based Distributed Convex Optimization
- Passivity-Based Distributed Optimization With Communication Delays Using PI Consensus Algorithm
- Distributed Continuous-Time Algorithm for Constrained Convex Optimizations via Nonsmooth Analysis Approach
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Harnessing Smoothness to Accelerate Distributed Optimization
- A variational perspective on accelerated methods in optimization
- Distributed Subgradient Method With Edge-Based Event-Triggered Communication
- Input-Feedforward-Passivity-Based Distributed Optimization Over Jointly Connected Balanced Digraphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms
- Distributed Continuous-Time Optimization: Nonuniform Gradient Gains, Finite-Time Convergence, and Convex Constraint Set
- A Multi-Agent System With a Proportional-Integral Protocol for Distributed Constrained Optimization
- ADD-OPT: Accelerated Distributed Directed Optimization
- Push–Pull Gradient Methods for Distributed Optimization in Networks