Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
DOI: 10.1007/s11424-021-9355-5
zbMATH Open: 1472.93006
OpenAlex: W3146350192
MaRDI QID: Q2235622
FDO: Q2235622
Authors: Jiexiang Wang, Keli Fu, Yu Gu, Tao Li
Publication date: 21 October 2021
Published in: Journal of Systems Science and Complexity
Full work available at URL: https://doi.org/10.1007/s11424-021-9355-5
Recommendations
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Distributed stochastic gradient tracking methods
- Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes
- Distributed subgradient method for multi-agent optimization with quantized communication
MSC Classification
- Convex programming (90C25)
- Random graphs (graph-theoretic aspects) (05C80)
- Multi-agent systems (93A16)
- Networked control (93B70)
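For orientation, the sketch below illustrates a generic gradient-tracking iteration over randomly sampled mixing matrices (a DIGing-style update), the technique named in the title. It is not the specific algorithm analyzed in this publication; the quadratic local losses, the Erdős–Rényi graph model, the Metropolis weights, and the step size are assumptions made purely for the example.

```python
# Minimal sketch (assumed setup, not the paper's algorithm): gradient tracking
# over a random graph that is re-sampled at every iteration.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 2

# Each agent i holds a private quadratic loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.normal(size=(n_agents, 3, dim))
b = rng.normal(size=(n_agents, 3))

def grad(i, x):
    """Local gradient of f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

def random_mixing_matrix(p=0.7):
    """Sample an Erdos-Renyi graph and return a doubly stochastic
    Metropolis weight matrix (an assumed random-graph model)."""
    adj = rng.random((n_agents, n_agents)) < p
    adj = np.triu(adj, 1)
    adj = adj + adj.T
    deg = adj.sum(axis=1)
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        for j in range(n_agents):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

alpha = 0.05                                   # constant step size (assumed)
x = rng.normal(size=(n_agents, dim))           # local estimates
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for k in range(500):
    W = random_mixing_matrix()                 # new random graph each iteration
    x_new = W @ x - alpha * y                  # consensus step minus tracked gradient
    # Tracker update: mix, then add the change in local gradients, so the
    # network average of y keeps tracking the average of the local gradients.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n_agents)])
    x = x_new

print("consensus spread:", np.max(np.abs(x - x.mean(axis=0))))
```

The design point the sketch highlights is the auxiliary variable y: its network average equals the average of the local gradients at every iteration, which is what allows gradient-tracking methods to use a constant step size, in contrast with diminishing-step subgradient schemes.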
Cites Work
- Robust Estimation of a Location Parameter
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Discrete-time dynamic average consensus
- Dual averaging methods for regularized stochastic learning and online optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Measure theory
- Fast Distributed Gradient Methods
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Convex optimization: algorithms and complexity
- Distributed Spectrum Sensing for Cognitive Radio Networks by Exploiting Sparsity
- Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- A distributed multiple dimensional QoS constrained resource scheduling optimization policy in computational grid
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed stochastic gradient tracking methods
- Distributed Averaging With Random Network Graphs and Noises
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
Cited In (16)
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Convergence Rates of Distributed Gradient Methods Under Random Quantization: A Stochastic Approximation Approach
- Distributed stochastic gradient tracking methods
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Strong consistency of random gradient-free algorithms for distributed optimization
- Nabla fractional distributed optimization algorithms over undirected/directed graphs
- Distributed solving linear algebraic equations with switched fractional order dynamics
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Distributed optimization for degenerate loss functions arising from over-parameterization
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- Exact spectral-like gradient method for distributed optimization
- Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes
- A Distributed SDP Approach for Large-Scale Noisy Anchor-Free Graph Realization with Applications to Molecular Conformation
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart