Stochastic mirror descent method for distributed multi-agent optimization
Publication: 1670526
DOI: 10.1007/s11590-016-1071-z
zbMATH Open: 1405.90036
OpenAlex: W2512769831
MaRDI QID: Q1670526
Authors: Jueyou Li, G. Q. Li, Changzhi Wu, Zhiyou Wu
Publication date: 5 September 2018
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-016-1071-z
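The publication concerns stochastic mirror descent (SMD) for distributed multi-agent optimization. The paper's distributed algorithm is not reproduced here; the sketch below is only a generic single-agent SMD update with the negative-entropy mirror map (exponentiated gradient) on the probability simplex, the textbook building block that distributed variants extend. The function names and the toy linear objective are illustrative assumptions, not taken from the paper.

```python
import math
import random

def stochastic_mirror_descent(grad_oracle, n, steps, eta):
    """Generic SMD sketch on the probability simplex (NOT the paper's
    distributed algorithm): entropy mirror map gives the exponentiated-
    gradient update x_i <- x_i * exp(-eta * g_i), then renormalize.
    Returns the averaged iterate, the standard SMD output."""
    x = [1.0 / n] * n            # start at the uniform distribution
    avg = [0.0] * n
    for _ in range(steps):
        g = grad_oracle(x)       # noisy subgradient sample
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        s = sum(w)
        x = [wi / s for wi in w]  # Bregman projection onto the simplex
        avg = [a + xi for a, xi in zip(avg, x)]
    return [a / steps for a in avg]

# Toy problem (assumed for illustration): minimize E[<c + noise, x>]
# over the simplex; the minimizer puts all mass on argmin_i c_i.
random.seed(0)
c = [0.9, 0.2, 0.7]
oracle = lambda x: [ci + random.gauss(0.0, 0.1) for ci in c]
x_bar = stochastic_mirror_descent(oracle, n=3, steps=2000, eta=0.5)
```

With these settings the averaged iterate concentrates on coordinate 1 (the smallest entry of `c`), as expected for a linear objective on the simplex.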
Recommendations
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed stochastic mirror descent algorithm for resource allocation problem
- Inexact dual averaging method for distributed multi-agent optimization
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
Classification: Convex programming (90C25); Deterministic network models in operations research (90B10); Stochastic programming (90C15)
Cites Work
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Title not available
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Gradient-free method for nonsmooth distributed optimization
- Title not available
- Constrained Consensus and Optimization in Multi-Agent Networks
- Distributed proximal-gradient method for convex optimization with inequality constraints
- Consensus Problems in Networks of Agents With Switching Topology and Time-Delays
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Dual averaging methods for regularized stochastic learning and online optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed average consensus with least-mean-square deviation
- The effect of deterministic noise in subgradient methods
- On stochastic subgradient mirror-descent algorithm with weighted averaging
- Joint and separate convexity of the Bregman distance
- Incremental stochastic subgradient algorithms for convex optimization
- Randomized smoothing for stochastic optimization
- Title not available
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
Cited In (20)
- Gradient-free algorithms for distributed online convex optimization
- Distributed subgradient method for multi-agent optimization with quantized communication
- Distributed Bregman-distance algorithms for min-max optimization
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- Stochastic mirror descent for convex optimization with consensus constraints
- Distributed stochastic gradient tracking methods
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
- Inexact dual averaging method for distributed multi-agent optimization
- Ergodic mirror descent
- Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
- Projected subgradient based distributed convex optimization with transmission noises
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
- Event-triggered distributed online convex optimization with delayed bandit feedback
- Distributed stochastic mirror descent algorithm for resource allocation problem
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Linear convergence rate analysis of a class of exact first-order distributed methods for weight-balanced time-varying networks and uncoordinated step sizes
- Distributed Coupled Multiagent Stochastic Optimization
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients
- A new class of distributed optimization algorithms: application to regression of distributed data