Non-Convex Distributed Optimization
From MaRDI portal
Publication:4589436
DOI: 10.1109/TAC.2017.2648041 · zbMATH Open: 1373.90123 · arXiv: 1512.00895 · OpenAlex: W2964137440 · MaRDI QID: Q4589436 · FDO: Q4589436
Authors: Tatiana Tatarenko, Behrouz Touri
Publication date: 10 November 2017
Published in: IEEE Transactions on Automatic Control
Abstract: We study distributed non-convex optimization on a time-varying multi-agent network. Each node has access to its own smooth local cost function, and the collective goal is to minimize the sum of these functions. We generalize previously obtained results to the case of non-convex functions. Under additional technical assumptions on the gradients, we prove convergence of the distributed push-sum algorithm to a critical point of the objective function. By adding perturbations to the update process, we show almost sure convergence of the perturbed dynamics to a local minimum of the global objective function, and our analysis establishes the convergence rate of this perturbed procedure.
Full work available at URL: https://arxiv.org/abs/1512.00895
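The abstract describes a perturbed push-sum scheme: agents mix their states through a column-stochastic matrix, de-bias by an auxiliary weight, and take a noisy gradient step with a diminishing step size. The following is a minimal one-dimensional sketch of that idea, not the paper's exact algorithm; the mixing matrix, local costs, step-size schedule, and noise scale below are illustrative assumptions.

```python
import numpy as np

def perturbed_push_sum(grads, A, x0, steps=5000, seed=0):
    """Sketch of perturbed push-sum over a fixed column-stochastic matrix A.

    grads : list of local gradient functions, one per agent
    A     : (n, n) column-stochastic mixing matrix (columns sum to 1)
    x0    : initial states, one scalar per agent
    """
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.array(x0, dtype=float)
    y = np.ones(n)                     # push-sum weights
    z = x / y
    for t in range(steps):
        alpha = 1.0 / (t + 10)         # diminishing step size (illustrative)
        w = A @ x                      # mix the optimization variables
        y = A @ y                      # mix the weights
        z = w / y                      # de-biased local estimates
        g = np.array([gi(zi) for gi, zi in zip(grads, z)])
        noise = 0.1 * rng.standard_normal(n)   # perturbation (illustrative scale)
        x = w - alpha * (g + noise)    # noisy gradient step
    return z

# Demo: 3 agents with non-convex quartic costs f_i(x) = (x - c_i)^4 / 4.
# The sum has its unique critical point at x = 0.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])       # columns sum to 1
grads = [lambda x, c=c: (x - c) ** 3 for c in (-1.0, 0.0, 1.0)]
z = perturbed_push_sum(grads, A, [0.5, -0.5, 0.2])
```

After running, the agents' de-biased estimates `z` should agree with one another (consensus) and sit near the critical point of the summed objective.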
Recommendations
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed Global Optimization for a Class of Nonconvex Optimization With Coupled Constraints
- A two-level distributed algorithm for nonconvex constrained optimization
- Distributed Continuous-Time Nonsmooth Convex Optimization With Coupled Inequality Constraints
- Distributed optimization over networks
- Randomized Algorithms for Distributed Nonlinear Optimization Under Sparsity Constraints
- Approximations in Distributed Optimization
- Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
- On decentralized nonsmooth optimization
Cited In (20)
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed nonconvex constrained optimization over time-varying digraphs
- Distributed primal-dual method on unbalanced digraphs with row stochasticity
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- An event-triggered collaborative neurodynamic approach to distributed global optimization
- DIMIX: Diminishing Mixing for Sloppy Agents
- Distributed Learning in Non-Convex Environments— Part II: Polynomial Escape From Saddle-Points
- Decentralized nonconvex optimization with guaranteed privacy and accuracy
- A simple framework for stability analysis of state-dependent networks of heterogeneous agents
- A collective neurodynamic penalty approach to nonconvex distributed constrained optimization
- Generalized left-localized Cayley parametrization for optimization with orthogonality constraints
- Nonconvex Optimization for Communication Networks
- Decentralized dictionary learning over time-varying digraphs
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- One dimensional consensus based algorithm for non-convex optimization
- Second-order guarantees of distributed gradient algorithms
- Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
- Measurement-based efficient resource allocation with demand-side adjustments
- Stochastic learning in multi-agent optimization: communication and payoff-based approaches