Distributed asynchronous incremental subgradient methods
From MaRDI portal
Publication:2768028
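For context, the method this publication studies distributes the minimization of a sum of convex functions by stepping along a subgradient of one component at a time. The following is a minimal single-process sketch of that cyclic incremental subgradient update; the function names and the toy problem are illustrative assumptions, not taken from the publication itself.

```python
def incremental_subgradient(subgrads, x0, steps=2000, alpha0=1.0):
    """Cyclic incremental subgradient method for min_x sum_i f_i(x).

    Each iteration processes only one component f_i, stepping along a
    negative subgradient of that component with a diminishing stepsize --
    the core idea behind incremental (and distributed) subgradient schemes.
    """
    x = x0
    m = len(subgrads)
    for k in range(steps):
        g = subgrads[k % m](x)      # subgradient of one component f_i
        alpha = alpha0 / (k + 1)    # diminishing, non-summable stepsize
        x = x - alpha * g
    return x

# Toy problem: minimize sum_i |x - a_i| over x in R.
# A subgradient of |x - a_i| is sign(x - a_i); the minimizer is the median.
a = [1.0, 2.0, 10.0, 3.0, 4.0]
subgrads = [(lambda x, ai=ai: 1.0 if x > ai else (-1.0 if x < ai else 0.0))
            for ai in a]
x_star = incremental_subgradient(subgrads, x0=0.0)
```

With a diminishing stepsize the iterates approach the median of the data (here 3.0), oscillating with amplitude on the order of the current stepsize; asynchronous distributed variants let different processors handle different components with possibly stale iterates.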
Recommendations
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Incremental gradient-free method for nonsmooth distributed optimization
- A Distributed, Asynchronous, and Incremental Algorithm for Nonconvex Optimization: An ADMM Approach
- The incremental subgradient methods on distributed estimations in-network
- Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed Proximal Gradient Algorithm for Partially Asynchronous Computer Clusters
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
Cited in (35)
- An asynchronous bundle-trust-region method for dual decomposition of stochastic mixed-integer programming
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization
- Projected equation methods for approximate solution of large linear systems
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Communication-efficient algorithms for decentralized and stochastic optimization
- ARock: an algorithmic framework for asynchronous parallel coordinate updates
- Non-ergodic linear convergence property of the delayed gradient descent under the strongly convexity and the Polyak-Łojasiewicz condition
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- An incremental subgradient method on Riemannian manifolds
- Projection algorithms with dynamic stepsize for constrained composite minimization
- Scientific article; zbMATH DE number 833651 (no title available)
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- Asynchronous parallel algorithms for nonconvex optimization
- Incremental proximal methods for large scale convex optimization
- The incremental subgradient methods on distributed estimations in-network
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Asynchronous Gradient Push
- Stochastic first-order methods with random constraint projection
- Asynchronous Lagrangian scenario decomposition
- Distributed stochastic inertial-accelerated methods with delayed derivatives for nonconvex problems
- Parallel subgradient methods for convex optimization
- Global convergence rate of proximal incremental aggregated gradient methods
- Distributed Proximal Gradient Algorithm for Partially Asynchronous Computer Clusters
- A Distributed, Asynchronous, and Incremental Algorithm for Nonconvex Optimization: An ADMM Approach
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- The effect of deterministic noise in subgradient methods
- Primal-dual algorithms for multi-agent structured optimization over message-passing architectures with bounded communication delays
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization
- Distributed deterministic asynchronous algorithms in time-varying graphs through Dykstra splitting
- Asynchronous level bundle methods
- Near-optimal stochastic approximation for online principal component estimation