Strong consistency of random gradient-free algorithms for distributed optimization
Publication: 5346596
DOI: 10.1002/oca.2254
zbMATH Open: 1362.93172
OpenAlex: W2344594310
MaRDI QID: Q5346596
Authors: Xingmin Chen, Chao Gao
Publication date: 26 May 2017
Published in: Optimal Control Applications & Methods
Full work available at URL: https://doi.org/10.1002/oca.2254
Recommendations
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Gradient-free push-sum method for strongly convex distributed optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Gradient-free method for nonsmooth distributed optimization
Keywords: convergence analysis; multi-agent systems; distributed optimization; Gaussian smoothing; random gradient-free method
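The keywords name Gaussian smoothing and the random gradient-free method. As a rough illustration of that general technique, the sketch below combines a standard Nesterov-style two-point gradient-free oracle with a consensus averaging step; it is not the authors' exact algorithm, and the parameter names, the mixing matrix W, and the toy objectives are assumptions for this example.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    """Gaussian-smoothing gradient estimate (Nesterov-Spokoiny style):
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_gf_step(fs, xs, W, alpha, mu, rng):
    """One iteration: each agent averages neighbors' iterates with weights
    W[i, :], then takes a gradient-free step on its local objective fs[i]."""
    mixed = W @ xs  # consensus averaging (W row-stochastic)
    return np.stack([mixed[i] - alpha * gf_oracle(fs[i], mixed[i], mu, rng)
                     for i in range(len(fs))])

# Toy run (hypothetical setup): 3 agents minimize sum_i ||x - c_i||^2,
# whose minimizer is mean(c_i) = (0, 0).
rng = np.random.default_rng(0)
cs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
fs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in cs]
W = np.full((3, 3), 1.0 / 3.0)  # complete graph, doubly stochastic weights
xs = np.zeros((3, 2))
for k in range(2000):
    xs = distributed_gf_step(fs, xs, W, alpha=0.01, mu=1e-4, rng=rng)
print(xs.mean(axis=0))  # close to (0, 0) up to smoothing/step-size error
```

Agents here never evaluate gradients, only function values at randomly perturbed points, which is the defining feature of random gradient-free methods for nonsmooth or oracle-only objectives.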
Cites Work
- Gradient-free method for nonsmooth distributed optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- Randomized optimal consensus of multi-agent systems
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Incremental subgradient methods for nondifferentiable optimization
- Cooperative distributed multi-agent optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
Cited In (14)
- Asynchronous gossip-based gradient-free method for multiagent optimization
- Privacy-preserving distributed projected one-point bandit online optimization over directed graphs
- Gradient-free push-sum method for strongly convex distributed optimization
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- A distributed accelerated optimization algorithm over time‐varying directed graphs with uncoordinated step‐sizes
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Distributed multi-agent optimization with state-dependent communication
- Gradient-free method for nonsmooth distributed optimization
- A gradient‐free distributed optimization method for convex sum of nonconvex cost functions
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- A causal filter of gradient information for enhanced robustness and resilience in distributed convex optimization
- A fixed step distributed proximal gradient push‐pull algorithm based on integral quadratic constraint
- A resilient distributed optimization strategy against false data injection attacks
- Gradient-free distributed optimization with exact convergence