Distributed stochastic subgradient projection algorithms for convex optimization
From MaRDI portal
Abstract: We consider a distributed multi-agent network system whose goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates its iterates to its neighbors. Each agent then forms a weighted average of the received iterates and its own iterate, adjusts this average using subgradient information (known only with stochastic errors) of its own objective function, and projects the result onto the constraint set. This paper explores the effects of the stochastic subgradient errors on the convergence of the algorithm. We first study the behavior of the algorithm in mean, and then its convergence with probability 1 and in mean square. We consider general stochastic errors with uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove mean consensus among the agents and mean convergence to the optimal function value under diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen these results to consensus and convergence of the iterates to an optimal solution, both with probability 1 and in mean square.
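The update described in the abstract — each agent averages its neighbors' iterates, takes a noisy subgradient step on its own objective, and projects onto the common constraint set — can be sketched as follows. This is a minimal illustration with hypothetical data, not the paper's experimental setup: quadratic objectives f_i(x) = ½(x − c_i)², a box constraint X = [−1, 1], doubly stochastic weights on a ring, and zero-mean noise with bounded second moment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: agent i has f_i(x) = 0.5 * (x - c[i])^2 and the
# common constraint set is X = [-1, 1]. The sum of the f_i is minimized
# at mean(c) = 0.2, which lies inside X.
c = np.array([-0.8, 0.2, 0.5, 0.9])
n = len(c)

# Doubly stochastic mixing weights on a ring (each agent keeps half its
# own iterate and takes a quarter from each of its two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

x = rng.uniform(-1, 1, size=n)    # each agent's current iterate
for k in range(1, 20001):
    alpha = 1.0 / k               # diminishing stepsize
    v = W @ x                     # combine neighbors' iterates
    grad = v - c                  # subgradient of f_i at the combined point
    noise = 0.05 * rng.standard_normal(n)   # zero-mean stochastic error
    x = project(v - alpha * (grad + noise))

print(x)   # iterates should be in consensus near mean(c) = 0.2
```

With a diminishing stepsize and zero-mean errors (so the error means trivially diminish), the agents reach approximate consensus at the constrained optimum, in line with the convergence results stated in the abstract.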
Recommendations
- Stochastic mirror descent method for distributed multi-agent optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Inexact dual averaging method for distributed multi-agent optimization
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
Cites work
- Scientific article; zbMATH DE number 4164577
- Scientific article; zbMATH DE number 51132
- Scientific article; zbMATH DE number 3638844
- Scientific article; zbMATH DE number 2121575
- A Convergent Incremental Gradient Method with a Constant Step Size
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Convergence rate of incremental subgradient algorithms
- Convergence speed in distributed consensus and averaging
- Convex Analysis
- Convexity and characterization of optimal policies in a dynamic routing problem
- Cooperative distributed multi-agent optimization
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed average consensus with least-mean-square deviation
- Error stability properties of generalized gradient-type algorithms
- Gradient Convergence in Gradient Methods with Errors
- Handbook of applied optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- Subgradient methods for saddle-point problems
- Stochastic quasigradient methods and their application to system optimization
Cited in
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed resource allocation over random networks based on stochastic approximation
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Incremental stochastic subgradient algorithms for convex optimization
- Asynchronous algorithms for computing equilibrium prices in a capital asset pricing model
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- Fully distributed algorithms for convex optimization problems
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Stopping rules for optimization algorithms based on stochastic approximation
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Distributed algorithms for aggregative games on graphs
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Asynchronous gossip-based gradient-free method for multiagent optimization
- On the linear convergence of two decentralized algorithms
- Distributed convex optimization with coupling constraints over time-varying directed graphs
- Incremental proximal methods for large scale convex optimization
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Distributed stochastic gradient tracking methods
- Gradient-free method for nonsmooth distributed optimization
- A decentralized multi-objective optimization algorithm
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Distributed constrained optimization via continuous-time mirror design
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Distributed optimization with closed convex set for multi-agent networks over directed graphs
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients
- Distributed multi-agent optimization subject to nonidentical constraints and communication delays
- Revisiting EXTRA for Smooth Distributed Optimization
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
- A new class of distributed optimization algorithms: application to regression of distributed data
- Distributed Stochastic Optimization via Matrix Exponential Learning
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Distributed multi-agent optimization with state-dependent communication
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- Event-triggered zero-gradient-sum distributed consensus optimization over directed networks
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Primal-dual algorithm for distributed constrained optimization
- Distributed asynchronous incremental subgradient methods
- Multiuser optimization: distributed algorithms and error analysis
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
- Distributed optimization over directed graphs with row stochasticity and constraint regularity
- Newton-like method with diagonal correction for distributed optimization
- On the convergence of decentralized gradient descent
- Distributed multi-task classification: a decentralized online learning approach
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications
- Stochastic mirror descent method for distributed multi-agent optimization
- Linear time average consensus and distributed optimization on fixed graphs
- Inexact dual averaging method for distributed multi-agent optimization
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
- Distributed proximal‐gradient algorithms for nonsmooth convex optimization of second‐order multiagent systems
- Distributed mean reversion online portfolio strategy with stock network
- A Distributed Boyle--Dykstra--Han Scheme
- An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
- Strong consistency of random gradient-free algorithms for distributed optimization
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms
- Distributed model predictive control for linear systems under communication noise: algorithm, theory and implementation
- Subgradient averaging for multi-agent optimisation with different constraint sets
- Fully Distributed Algorithms for Convex Optimization Problems
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- Distributed Bregman-distance algorithms for min-max optimization
- Containment control of systems with constraints and bounded disturbances
- String-averaging incremental stochastic subgradient algorithms
- Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization
- Asymptotic properties of primal-dual algorithm for distributed stochastic optimization over random networks with imperfect communications
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Gradient-free algorithms for distributed online convex optimization
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Distributed stochastic approximation with local projections
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- Consensus-based distributed optimisation of multi-agent networks via a two level subgradient-proximal algorithm
- Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
- A dual approach for optimal algorithms in distributed optimization over networks
- Distributed projection‐free algorithm for constrained aggregative optimization
- A collective neurodynamic penalty approach to nonconvex distributed constrained optimization
- Distributed Newton methods for strictly convex consensus optimization problems in multi-agent networks
- Computing over unreliable communication networks
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- A multi-scale method for distributed convex optimization with constraints
- Projected subgradient based distributed convex optimization with transmission noises
- Stochastic mirror descent for convex optimization with consensus constraints
- Distributed Nash equilibrium computation in aggregative games: an event-triggered algorithm
- Distributed constrained stochastic optimal consensus
- Fast decentralized nonconvex finite-sum optimization with recursive variance reduction
MaRDI item Q620442