A gradient‐free distributed optimization method for convex sum of nonconvex cost functions
Publication: 6069332
DOI: 10.1002/RNC.6266 · zbMATH Open: 1528.93015 · arXiv: 2104.10971 · OpenAlex: W3154894049 · MaRDI QID: Q6069332
Authors: Yipeng Pang, Guoqiang Hu
Publication date: 16 December 2023
Published in: International Journal of Robust and Nonlinear Control
Abstract: This paper studies a special class of distributed optimization problems in which the sum of the agents' local cost functions (i.e., the global cost function) is convex, while each individual local cost function may be nonconvex. Unlike most distributed optimization algorithms, which rely on gradient information, the considered problem is allowed to be nonsmooth, and gradient information is unavailable to the agents. To solve the problem, a Gaussian-smoothing technique is introduced and a gradient-free method is proposed. We prove that each agent's iterate approximately converges to the optimal solution both with probability 1 and in mean, and we provide an upper bound on the optimality gap, characterized by the difference between the functional value of the iterate and the optimal value. The performance of the proposed algorithm is demonstrated through a numerical example and an application to privacy enhancement.
Full work available at URL: https://arxiv.org/abs/2104.10971
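The Gaussian-smoothing idea behind gradient-free methods of this kind can be illustrated with a short sketch. This is not the paper's distributed algorithm (which runs over a multi-agent network); it is a minimal single-agent version of the standard two-point Gaussian-smoothed gradient estimator, applied to a toy convex quadratic. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def smoothed_gradient_estimate(f, x, mu, rng):
    """Two-point Gaussian-smoothing gradient estimator.

    Approximates the gradient of the smoothed surrogate
    f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, I), using only two
    function evaluations -- no gradient oracle is needed.
    """
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def gradient_free_descent(f, x0, steps=2000, mu=1e-3, lr=1e-2, seed=0):
    """Plain zeroth-order descent using the estimator above
    (illustrative parameters, not the paper's step-size rule)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * smoothed_gradient_estimate(f, x, mu, rng)
    return x

# Toy example: minimize a smooth convex quadratic with minimum at the
# all-ones vector, using only function evaluations.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = gradient_free_descent(f, np.zeros(3))
```

In expectation, the estimator equals the gradient of the smoothed surrogate, so for small `mu` the iterates behave like noisy gradient descent; the smoothing parameter trades estimation bias against variance, which is the mechanism the paper's optimality-gap bound quantifies.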
Cites Work
- Gradient-free method for nonsmooth distributed optimization
- Random gradient-free minimization of convex functions
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed and Recursive Parameter Estimation in Parametrized Linear State-Space Models
- Average consensus on general strongly connected digraphs
- Random optimization
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Distributed Time-Varying Quadratic Optimization for Multiple Agents Under Undirected Graphs
- An Approximate Dual Subgradient Algorithm for Multi-Agent Non-Convex Optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Differentially Private Distributed Constrained Optimization
- Differentially Private Distributed Convex Optimization via Functional Perturbation
- Distributed Subgradient Projection Algorithm Over Directed Graphs
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Constrained Consensus Algorithms With Fixed Step Size for Distributed Convex Optimization Over Multiagent Networks
- Randomized Gradient-Free Distributed Optimization Methods for a Multiagent System With Unknown Cost Function
- Online Distributed Convex Optimization on Dynamic Networks
- Online Distributed Optimization With Strongly Pseudoconvex-Sum Cost Functions
- Distributed Online Convex Optimization With Time-Varying Coupled Inequality Constraints
- Strong consistency of random gradient‐free algorithms for distributed optimization
- Distributed Robust Multicell Coordinated Beamforming With Imperfect CSI: An ADMM Approach
Cited In (2)