Gradient-free method for nonsmooth distributed optimization
DOI: 10.1007/s10898-014-0174-2 · zbMATH Open: 1341.90135 · OpenAlex: W2042220749 · MaRDI QID: Q2018475 · FDO: Q2018475
Authors: Jueyou Li, Changzhi Wu, Qiang Long, Zhiyou Wu
Publication date: 24 March 2015
Published in: Journal of Global Optimization
Full work available at URL: http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/61584
Recommendations
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Gradient-free push-sum method for strongly convex distributed optimization
- Strong consistency of random gradient-free algorithms for distributed optimization
- Gradient-free distributed optimization with exact convergence
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
Mathematics Subject Classification: Convex programming (90C25); Programming involving graphs or networks (90C35); Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- Matrix Analysis
- Primal-dual subgradient methods for convex problems
- Introduction to Stochastic Search and Optimization
- Title not available
- Constrained Consensus and Optimization in Multi-Agent Networks
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Random gradient-free minimization of convex functions
- Dual averaging methods for regularized stochastic learning and online optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Incremental subgradient methods for nondifferentiable optimization
- Title not available
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed average consensus with least-mean-square deviation
- Incremental proximal methods for large scale convex optimization
- Robust identification
- Global optimization for molecular clusters using a new smoothing approach
- Randomized smoothing for stochastic optimization
- Convergence rate for consensus with delays
- New horizons in sphere-packing theory, part II: Lattice-based derivative-free optimization via global surrogates
- Piecewise partially separable functions and a derivative-free algorithm for large scale nonsmooth optimization
Cited In (19)
- Distributed convex optimization with coupling constraints over time-varying directed graphs
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
- Strong consistency of random gradient-free algorithms for distributed optimization
- Distributed subgradient method for multi-agent optimization with quantized communication
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- Splitting proximal with penalization schemes for additive convex hierarchical minimization problems
- A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints
- A gradient-free distributed optimization method for convex sum of nonconvex cost functions
- Recent theoretical advances in decentralized distributed convex optimization
- Stochastic mirror descent method for distributed multi-agent optimization
- Gradient-free method for distributed multi-agent optimization via push-sum algorithms
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- A fully distributed ADMM-based dispatch approach for virtual power plant problems
- Sparsity-promoting distributed charging control for plug-in electric vehicles over distribution networks
- Incremental gradient-free method for nonsmooth distributed optimization
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- Gradient-free distributed optimization with exact convergence
- An objective penalty function-based method for inequality constrained minimization problem