Distributed stochastic nonsmooth nonconvex optimization
From MaRDI portal
Publication: Q2102823
DOI: 10.1016/J.ORL.2022.09.001
OpenAlex: W4288029473
MaRDI QID: Q2102823
Publication date: 12 December 2022
Published in: Operations Research Letters
Full work available at URL: https://arxiv.org/abs/1911.00844
Cites Work
- Title not available
- Title not available
- Clarke Subgradients of Stratifiable Functions
- Constrained Consensus and Optimization in Multi-Agent Networks
- Parallel and distributed successive convex approximation methods for big-data optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Stochastic Approximations and Differential Inclusions
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Decentralized Frank–Wolfe Algorithm for Convex and Nonconvex Problems
- On Nonconvex Decentralized Gradient Descent
- Stochastic Model-Based Minimization of Weakly Convex Functions
Cited In (15)
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Distributed Stochastic Nonsmooth Nonconvex Optimization
- Distributed Learning in Non-Convex Environments—Part II: Polynomial Escape From Saddle-Points
- A Smooth Double Proximal Primal-Dual Algorithm for a Class of Distributed Nonsmooth Optimization Problems
- Distributed statistical optimization for non-randomly stored big data with application to penalized learning
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- Exponentially Convergent Algorithm Design for Constrained Distributed Optimization via Nonsmooth Approach
- Non-Convex Distributed Optimization
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Revisiting EXTRA for Smooth Distributed Optimization
- Advances in Distributed Optimization Using Probability Collectives
- Distributed Stochastic Optimization via Matrix Exponential Learning
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
Recommendations
- Distributed stochastic subgradient projection algorithms for convex optimization
- A Distributed Continuous-Time Algorithm for Nonsmooth Constrained Optimization
- Randomized Algorithms for Distributed Nonlinear Optimization Under Sparsity Constraints
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- Non-Convex Distributed Optimization
- Distributed Continuous-Time Nonsmooth Convex Optimization With Coupled Inequality Constraints
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Distributed Global Optimization for a Class of Nonconvex Optimization With Coupled Constraints
- Distributed Stochastic Consensus Optimization With Momentum for Nonconvex Nonsmooth Problems