Zeroth-order algorithms for stochastic distributed nonconvex optimization
Abstract: In this paper, we consider a stochastic distributed nonconvex optimization problem in which the cost function is distributed over $n$ agents that have access only to zeroth-order (ZO) information of the cost. This problem has various machine learning applications. As a solution, we propose two distributed ZO algorithms in which, at each iteration, each agent samples the local stochastic ZO oracle at two points with a time-varying smoothing parameter. We show that the proposed algorithms achieve the linear speedup convergence rate $\mathcal{O}(\sqrt{p/(nT)})$ for smooth cost functions under state-dependent variance assumptions, which are more general than the commonly used bounded variance and Lipschitz assumptions, and the convergence rate $\mathcal{O}(p/(nT))$ when the global cost function additionally satisfies the Polyak–Łojasiewicz (P–Ł) condition, where $p$ and $T$ are the dimension of the decision variable and the total number of iterations, respectively. To the best of our knowledge, this is the first linear speedup result for distributed ZO algorithms; it means that performance can be improved systematically by adding more agents. We also show that the proposed algorithms converge linearly under the relative bounded second moment assumptions and the P–Ł condition. We demonstrate through numerical experiments the efficiency of our algorithms at generating adversarial examples from deep neural networks, in comparison with baseline and recently proposed centralized and distributed ZO algorithms.
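To make the two-point oracle scheme in the abstract concrete, the following Python sketch shows a generic consensus-based distributed ZO iteration: each agent draws a random direction, queries its local stochastic oracle at two symmetric points under the same sample, and uses a smoothing parameter that decays over time. The mixing matrix `W`, the step size `eta`, the $1/\sqrt{t}$ smoothing schedule, and the multiplicative-noise oracle (one instance of state-dependent variance) are illustrative choices, not the paper's exact algorithms or parameters.

```python
import numpy as np

def two_point_zo_grad(oracle, x, delta, rng):
    """Two-point zeroth-order gradient estimate with smoothing parameter delta.

    The stochastic oracle is queried at x + delta*u and x - delta*u under the
    same random sample xi, mirroring a two-point ZO scheme.
    """
    p = x.size
    u = rng.standard_normal(p)
    u /= np.linalg.norm(u)                       # uniform direction on the unit sphere
    xi = rng.standard_normal()                   # one oracle sample, reused at both points
    diff = oracle(x + delta * u, xi) - oracle(x - delta * u, xi)
    return (p / (2.0 * delta)) * diff * u

def distributed_zo(oracles, W, x0, T, eta=0.02, delta0=0.1, seed=0):
    """Consensus-based distributed ZO iteration (illustrative sketch only).

    oracles: list of n local stochastic ZO oracles, oracle(x, xi) -> float.
    W: doubly stochastic mixing matrix of the communication graph.
    """
    rng = np.random.default_rng(seed)
    n = len(oracles)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))   # row i = agent i's iterate
    for t in range(1, T + 1):
        delta = delta0 / np.sqrt(t)              # time-varying smoothing parameter
        G = np.array([two_point_zo_grad(oracles[i], X[i], delta, rng)
                      for i in range(n)])
        X = W @ X - eta * G                      # mix with neighbors, then local ZO step
    return X.mean(axis=0)

if __name__ == "__main__":
    # n agents minimize the average of noisy local quadratics |x - c_i|^2;
    # the global minimizer is the mean of the c_i (1.5 in every coordinate).
    n, p = 4, 10
    make_oracle = lambda c: (lambda x, xi: np.sum((x - c) ** 2) * (1.0 + 0.1 * xi))
    oracles = [make_oracle(np.full(p, float(i))) for i in range(n)]
    W = np.full((n, n), 1.0 / n)                 # complete graph, uniform weights
    x_hat = distributed_zo(oracles, W, np.zeros(p), T=3000)
    print(np.round(x_hat, 1))                    # each coordinate ≈ 1.5, up to ZO noise
```

Reusing the same sample `xi` at both query points is what allows the smoothing parameter to shrink over time without inflating the variance of the estimator.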
Recommendations
- Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The Continuous-Time Case
- Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization
- Randomized Algorithms for Distributed Nonlinear Optimization Under Sparsity Constraints
- A local-minimization-free zero-gradient-sum algorithm for distributed optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Non-Convex Distributed Optimization
- A zeroth order method for stochastic weakly convex optimization
- Distributed Stochastic Consensus Optimization With Momentum for Nonconvex Nonsmooth Problems
Cites work
- A Simplex Method for Function Minimization
- A new one-point residual-feedback oracle for black-box learning and control
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Accelerated Distributed Nesterov Gradient Descent
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Derivative-free and blackbox optimization
- Derivative-free optimization methods
- Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Harnessing Smoothness to Accelerate Distributed Optimization
- Introduction to Derivative-Free Optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Random gradient-free minimization of convex functions
- Random optimization
- Randomized Gradient-Free Distributed Optimization Methods for a Multiagent System With Unknown Cost Function
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Self-Correcting Geometry in Model-Based Algorithms for Derivative-Free Unconstrained Optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic three points method for unconstrained smooth minimization
- Wedge trust region method for derivative free optimization
- ZONE: Zeroth-Order Nonconvex Multiagent Optimization Over Networks
- "Direct Search" Solution of Numerical and Statistical Problems
Cited in (7)
- Distributed stochastic nonsmooth nonconvex optimization
- Zeroth-order stochastic compositional algorithms for risk-aware learning
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- A zeroth order method for stochastic weakly convex optimization
- Stochastic zeroth order descent with structured directions
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart