Distributed stochastic subgradient projection algorithms for convex optimization
Publication:620442
DOI: 10.1007/S10957-010-9737-7
zbMATH Open: 1254.90171
arXiv: 0811.2595
OpenAlex: W2066332749
MaRDI QID: Q620442
FDO: Q620442
Angelia Nedić, Venugopal V. Veeravalli, S. Sundhar Ram
Publication date: 19 January 2011
Published in: Journal of Optimization Theory and Applications
Abstract: We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
Full work available at URL: https://arxiv.org/abs/0811.2595
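The iteration described in the abstract has a simple structure: at step k each agent i forms a weighted average v_i(k) of the iterates received from its neighbors, steps along a noisy subgradient of its own objective, and projects back onto the common constraint set, i.e. x_i(k+1) = P_X[ v_i(k) - alpha_k (g_i(k) + eps_i(k)) ]. The following is a minimal simulation sketch of that scheme, not the authors' code; the ring network, the doubly stochastic 1/3 weights, the per-agent objectives f_i(x) = |x - b_i|, the box constraint set, the Gaussian subgradient errors, and the 1/k stepsize are all illustrative assumptions.

```python
# Minimal sketch of a distributed stochastic subgradient projection iteration.
# All concrete choices (network, weights, objectives, constraint set, noise,
# stepsize) are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

m = 5                                        # number of agents
b = np.array([-2.0, -1.0, 0.0, 1.0, 3.0])    # per-agent data: f_i(x) = |x - b_i|
lo, hi = -1.0, 1.0                           # common constraint set X = [lo, hi]

# Doubly stochastic mixing weights on a ring: each agent averages itself
# and its two neighbors with weight 1/3.
W = np.zeros((m, m))
for i in range(m):
    for j in (i - 1, i, (i + 1) % m):
        W[i, j] = 1.0 / 3.0

def subgradient(i, x):
    """A subgradient of f_i(x) = |x - b[i]|."""
    return np.sign(x - b[i])

x = rng.uniform(lo, hi, size=m)              # initial iterates, one per agent
for k in range(1, 5001):
    alpha = 1.0 / k                          # diminishing stepsize
    v = W @ x                                # consensus step: weighted averaging
    g = np.array([subgradient(i, v[i]) for i in range(m)])
    noise = 0.1 * rng.standard_normal(m)     # stochastic subgradient errors
    x = np.clip(v - alpha * (g + noise), lo, hi)   # subgradient step + projection

# With zero-mean errors and diminishing stepsizes, the agents' iterates should
# cluster near the constrained minimizer of sum_i |x - b_i| (here, x = 0).
print("agent iterates:", np.round(x, 3))
```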
MSC classifications: Convex programming (90C25), Stochastic programming (90C15), Stochastic network models in operations research (90B15)
Cites Work
- Convergence rate of incremental subgradient algorithms
- Convex Analysis
- Subgradient methods for saddle-point problems
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Handbook of applied optimization
- Gradient Convergence in Gradient Methods with Errors
- A Convergent Incremental Gradient Method with a Constant Step Size
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Incremental subgradient methods for nondifferentiable optimization
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed average consensus with least-mean-square deviation
- Stochastic quasigradient methods and their application to system optimization
- Convexity and characterization of optimal policies in a dynamic routing problem
- Error stability properties of generalized gradient-type algorithms
- Convergence Speed in Distributed Consensus and Averaging
- Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
Cited In (96)
- Distributed convex optimization with coupling constraints over time-varying directed graphs
- Distributed resource allocation over random networks based on stochastic approximation
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Asynchronous gossip-based gradient-free method for multiagent optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
- Distributed optimization over directed graphs with row stochasticity and constraint regularity
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications
- Stopping rules for optimization algorithms based on stochastic approximation
- Distributed stochastic gradient tracking methods
- Event-triggered zero-gradient-sum distributed consensus optimization over directed networks
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- Distributed multi-task classification: a decentralized online learning approach
- On the linear convergence of two decentralized algorithms
- Incremental proximal methods for large scale convex optimization
- Gradient-free method for nonsmooth distributed optimization
- A decentralized multi-objective optimization algorithm
- Distributed Nash equilibrium computation in aggregative games: an event-triggered algorithm
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Inexact dual averaging method for distributed multi-agent optimization
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- Primal-dual algorithm for distributed constrained optimization
- Distributed primal–dual stochastic subgradient algorithms for multi‐agent optimization under inequality constraints
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
- Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
- Stochastic mirror descent method for distributed multi-agent optimization
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- On the convergence of decentralized gradient descent
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
- Revisiting EXTRA for Smooth Distributed Optimization
- A multi-scale method for distributed convex optimization with constraints
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Newton-like Method with Diagonal Correction for Distributed Optimization
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed Stochastic Optimization via Matrix Exponential Learning
- Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
- Distributed asynchronous incremental subgradient methods
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Distributed Stochastic Approximation with Local Projections
- Distributed algorithms for aggregative games on graphs
- Distributed multi-agent optimization subject to nonidentical constraints and communication delays
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- Asynchronous Algorithms for Computing Equilibrium Prices in a Capital Asset Pricing Model
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- Distributed constrained optimization via continuous-time mirror design
- Distributed optimization with closed convex set for multi-agent networks over directed graphs
- A new class of distributed optimization algorithms: application to regression of distributed data
- Distributed mean reversion online portfolio strategy with stock network
- A Distributed Boyle–Dykstra–Han Scheme
- An accelerated exact distributed first-order algorithm for optimization over directed networks
- Strong consistency of random gradient‐free algorithms for distributed optimization
- Gradient-free algorithms for distributed online convex optimization
- Consensus-based distributed optimisation of multi-agent networks via a two level subgradient-proximal algorithm
- String-averaging incremental stochastic subgradient algorithms
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- Stochastic mirror descent for convex optimization with consensus constraints
- Decentralized nonconvex optimization with guaranteed privacy and accuracy
- Distributed proximal‐gradient algorithms for nonsmooth convex optimization of second‐order multiagent systems
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
- Distributed Deterministic Asynchronous Algorithms in Time-Varying Graphs Through Dykstra Splitting
- An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
- A continuous-time neurodynamic approach and its discretization for distributed convex optimization over multi-agent systems
- Distributed primal-dual optimisation method with uncoordinated time-varying step-sizes
- Distributed model predictive control for linear systems under communication noise: algorithm, theory and implementation
- Distributed projection‐free algorithm for constrained aggregative optimization
- A collective neurodynamic penalty approach to nonconvex distributed constrained optimization
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization
- Distributed Newton methods for strictly convex consensus optimization problems in multi-agent networks
- Computing over Unreliable Communication Networks
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Noise-to-state exponentially stable distributed convex optimization on weight-balanced digraphs
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms
- Distributed Bregman-Distance Algorithms for Min-Max Optimization
- Containment control of systems with constraints and bounded disturbances
- Asymptotic Properties of Primal-Dual Algorithm for Distributed Stochastic Optimization over Random Networks with Imperfect Communications