Strong consistency of random gradient-free algorithms for distributed optimization
From MaRDI portal
Publication:5346596
Recommendations
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Gradient-free push-sum method for strongly convex distributed optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Gradient-free method for nonsmooth distributed optimization
Cites work
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Cooperative distributed multi-agent optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Gradient-free method for nonsmooth distributed optimization
- Gradient-free method for distributed multi-agent optimization via push-sum algorithms
- Incremental stochastic subgradient algorithms for convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- Randomized optimal consensus of multi-agent systems
Cited in (14)
- Asynchronous gossip-based gradient-free method for multiagent optimization
- Privacy-preserving distributed projected one-point bandit online optimization over directed graphs
- Gradient-free push-sum method for strongly convex distributed optimization
- Gradient-free federated learning methods with l_1 and l_2-randomization for non-smooth convex stochastic optimization problems
- A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
- Distributed multi-agent optimization with state-dependent communication
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Gradient-free method for nonsmooth distributed optimization
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- A causal filter of gradient information for enhanced robustness and resilience in distributed convex optimization
- A fixed step distributed proximal gradient push-pull algorithm based on integral quadratic constraint
- Gradient-free distributed optimization with exact convergence
- A resilient distributed optimization strategy against false data injection attacks