Distributed stochastic subgradient projection algorithms for convex optimization
From MaRDI portal
Abstract: We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
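The update described in the abstract — mix neighbors' iterates, take a noisy subgradient step, then project onto the constraint set — can be sketched on a toy instance. The problem data, mixing weights, noise model, and stepsize schedule below are illustrative assumptions for a minimal example, not taken from the paper:

```python
import numpy as np

# Toy instance (assumed): n agents jointly minimize sum_i (x - a_i)^2
# over the common constraint set X = [0, 1].
rng = np.random.default_rng(0)
n = 4
a = np.array([0.1, 0.3, 0.6, 0.8])        # each agent's private objective data
W = np.full((n, n), 1.0 / n)              # doubly stochastic mixing weights
x = rng.random(n)                         # one iterate per agent

def project(v):
    # Euclidean projection onto X = [0, 1]
    return np.clip(v, 0.0, 1.0)

for k in range(1, 5001):
    alpha = 1.0 / k                       # diminishing stepsize
    mixed = W @ x                         # weighted average of neighbors' iterates
    noise = 0.01 * rng.standard_normal(n) # zero-mean subgradient error
                                          # (bounded second moment)
    grad = 2.0 * (mixed - a) + noise      # noisy subgradient of each local f_i
    x = project(mixed - alpha * grad)     # subgradient step, then projection

# For this toy problem the optimum is x* = mean(a) = 0.45, and the agents'
# iterates reach approximate consensus near it.
```

With zero-mean errors of bounded variance and a diminishing stepsize, the agents' iterates cluster together (consensus) and approach the minimizer of the aggregate objective, which is the regime the paper analyzes.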
Recommendations
- Stochastic mirror descent method for distributed multi-agent optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Inexact dual averaging method for distributed multi-agent optimization
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
Cites work
- Scientific article; zbMATH DE number 4164577 (no title available)
- Scientific article; zbMATH DE number 51132 (no title available)
- Scientific article; zbMATH DE number 3638844 (no title available)
- Scientific article; zbMATH DE number 2121575 (no title available)
- A Convergent Incremental Gradient Method with a Constant Step Size
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Convergence rate of incremental subgradient algorithms
- Convergence speed in distributed consensus and averaging
- Convex Analysis
- Convexity and characterization of optimal policies in a dynamic routing problem
- Cooperative distributed multi-agent optimization
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed average consensus with least-mean-square deviation
- Error stability properties of generalized gradient-type algorithms
- Gradient Convergence in Gradient Methods with Errors
- Handbook of applied optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- Subgradient methods for saddle-point problems
- Stochastic quasigradient methods and their application to system optimization
Cited in
(only the first 100 citing items are shown)
- Computing over unreliable communication networks
- Distributed multi-agent optimization subject to nonidentical constraints and communication delays
- Distributed algorithms for aggregative games on graphs
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- Containment control of systems with constraints and bounded disturbances
- Distributed constrained optimization via continuous-time mirror design
- Distributed optimization with closed convex set for multi-agent networks over directed graphs
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients
- A new class of distributed optimization algorithms: application to regression of distributed data
- Distributed convex optimization with coupling constraints over time-varying directed graphs
- A Distributed Boyle–Dykstra–Han Scheme
- Fully distributed algorithms for convex optimization problems
- Distributed resource allocation over random networks based on stochastic approximation
- Distributed mean reversion online portfolio strategy with stock network
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Asynchronous gossip-based gradient-free method for multiagent optimization
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
- Distributed optimization over directed graphs with row stochasticity and constraint regularity
- Linear time average consensus and distributed optimization on fixed graphs
- Newton-like method with diagonal correction for distributed optimization
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications
- Consensus-based distributed optimisation of multi-agent networks via a two level subgradient-proximal algorithm
- Gradient-free algorithms for distributed online convex optimization
- Distributed line search for multiagent convex optimization
- String-averaging incremental stochastic subgradient algorithms
- Distributed Bregman-distance algorithms for min-max optimization
- Stopping rules for optimization algorithms based on stochastic approximation
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Distributed multi-agent optimization with state-dependent communication
- Distributed stochastic gradient tracking methods
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- Stochastic mirror descent for convex optimization with consensus constraints
- Event-triggered zero-gradient-sum distributed consensus optimization over directed networks
- Decentralized nonconvex optimization with guaranteed privacy and accuracy
- Asynchronous algorithms for computing equilibrium prices in a capital asset pricing model
- Distributed proximal-gradient algorithms for nonsmooth convex optimization of second-order multiagent systems
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Multiuser optimization: distributed algorithms and error analysis
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- Distributed multi-task classification: a decentralized online learning approach
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- Incremental proximal methods for large scale convex optimization
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
- On the linear convergence of two decentralized algorithms
- Gradient-free method for nonsmooth distributed optimization
- A decentralized multi-objective optimization algorithm
- Distributed Nash equilibrium computation in aggregative games: an event-triggered algorithm
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Asymptotic properties of primal-dual algorithm for distributed stochastic optimization over random networks with imperfect communications
- Strong consistency of random gradient-free algorithms for distributed optimization
- Distributed stochastic approximation with local projections
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Inexact dual averaging method for distributed multi-agent optimization
- An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
- Primal-dual algorithm for distributed constrained optimization
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- A continuous-time neurodynamic approach and its discretization for distributed convex optimization over multi-agent systems
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
- Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
- Distributed primal-dual optimisation method with uncoordinated time-varying step-sizes
- Distributed model predictive control for linear systems under communication noise: algorithm, theory and implementation
- Subgradient averaging for multi-agent optimisation with different constraint sets
- Stochastic mirror descent method for distributed multi-agent optimization
- Distributed projection-free algorithm for constrained aggregative optimization
- A collective neurodynamic penalty approach to nonconvex distributed constrained optimization
- Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization
- A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
- Distributed Newton methods for strictly convex consensus optimization problems in multi-agent networks
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- On the convergence of decentralized gradient descent
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Distributed constrained stochastic optimal consensus
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- A dual approach for optimal algorithms in distributed optimization over networks
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Gradient-free method for distributed multi-agent optimization via push-sum algorithms
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
- Fast decentralized nonconvex finite-sum optimization with recursive variance reduction
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Noise-to-state exponentially stable distributed convex optimization on weight-balanced digraphs
- Fully Distributed Algorithms for Convex Optimization Problems
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms
- A multi-scale method for distributed convex optimization with constraints
- Projected subgradient based distributed convex optimization with transmission noises
- Revisiting EXTRA for Smooth Distributed Optimization
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- Primal-dual ε-subgradient method for distributed optimization
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
- On stochastic gradient and subgradient methods with adaptive steplength sequences