Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
From MaRDI portal
Recommendations
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Distributed stochastic gradient tracking methods
- Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes
- Distributed subgradient method for multi-agent optimization with quantized communication
Cites work
- A distributed multiple dimensional QoS constrained resource scheduling optimization policy in computational grid
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Convex optimization: algorithms and complexity
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Discrete-time dynamic average consensus
- Distributed Averaging With Random Network Graphs and Noises
- Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise
- Distributed Spectrum Sensing for Cognitive Radio Networks by Exploiting Sparsity
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed stochastic gradient tracking methods
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Dual averaging methods for regularized stochastic learning and online optimization
- Fast Distributed Gradient Methods
- Harnessing Smoothness to Accelerate Distributed Optimization
- Measure theory
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Robust Estimation of a Location Parameter
Cited in (16)
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Convergence Rates of Distributed Gradient Methods Under Random Quantization: A Stochastic Approximation Approach
- Distributed stochastic gradient tracking methods
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Strong consistency of random gradient-free algorithms for distributed optimization
- Nabla fractional distributed optimization algorithms over undirected/directed graphs
- Distributed solving linear algebraic equations with switched fractional order dynamics
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Distributed optimization for degenerate loss functions arising from over-parameterization
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- Exact spectral-like gradient method for distributed optimization
- Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes
- A Distributed SDP Approach for Large-Scale Noisy Anchor-Free Graph Realization with Applications to Molecular Conformation
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart
This page was built for publication: Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
MaRDI item: Q2235622