Swarming for faster convergence in stochastic optimization
DOI: 10.1137/17M1111085 · zbMATH Open: 1401.90138 · arXiv: 1806.04207 · OpenAlex: W2964263318 · MaRDI QID: Q4582016 · FDO: Q4582016
Authors: Shi Pu, Alfredo García
Publication date: 22 August 2018
Published in: SIAM Journal on Control and Optimization
Abstract: We study a distributed framework for stochastic optimization which is inspired by models of collective motion found in nature (e.g., swarming) and has mild communication requirements. Specifically, we analyze a scheme in which each of several independent threads implements, in a distributed and unsynchronized fashion, a stochastic gradient descent algorithm perturbed by a swarming potential. Assuming the overhead caused by synchronization is not negligible, we show that the swarming-based approach exhibits better performance than a centralized algorithm (based upon the average of observations) in terms of (real-time) convergence speed. We also derive an error bound that is monotone decreasing in network size and connectivity. We characterize the scheme's finite-time performance for both convex and non-convex objective functions.
Full work available at URL: https://arxiv.org/abs/1806.04207
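The scheme described in the abstract couples each thread's stochastic gradient step with an attraction term toward the other threads' iterates. The sketch below is an illustrative, minimal interpretation of that idea, not the authors' implementation: the quadratic objective, the noise model, the complete communication graph, the step size, and the `attraction` coefficient are all assumptions chosen for the example.

```python
# Minimal sketch (assumed setup, not the paper's code): each thread runs SGD on a
# noisy quadratic objective, and its update is perturbed by a swarming term that
# pulls the iterate toward the average of the other threads' iterates.
import numpy as np

rng = np.random.default_rng(0)


def noisy_gradient(x):
    # Stochastic gradient of f(x) = 0.5 * ||x||^2 with additive Gaussian noise,
    # standing in for each thread's noisy gradient observations.
    return x + rng.normal(scale=0.5, size=x.shape)


def swarm_sgd(num_threads=10, dim=5, steps=2000, step_size=0.01, attraction=1.0):
    # One iterate per thread; a complete graph stands in for the communication network.
    x = rng.normal(size=(num_threads, dim))
    for _ in range(steps):
        mean_x = x.mean(axis=0)
        for i in range(num_threads):
            grad = noisy_gradient(x[i])
            # Swarming perturbation: gradient of a quadratic attraction potential,
            # pulling thread i toward the average iterate of the swarm.
            swarm_force = attraction * (x[i] - mean_x)
            x[i] = x[i] - step_size * (grad + swarm_force)
    return x


if __name__ == "__main__":
    iterates = swarm_sgd()
    print("average distance to optimum (origin):",
          np.linalg.norm(iterates, axis=1).mean())
```

In this toy run the attraction term averages out the independent gradient noise across threads, which is the mechanism the abstract credits for faster real-time convergence without full synchronization.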
Recommendations
- A flocking-based approach for distributed stochastic optimization
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic algorithm for global optimization in networked system
- A consensus-based model for global optimization and its mean-field limit
- Optimal convergence rates for convex distributed optimization in networks
Mathematics Subject Classification: Convex programming (90C25) · Analysis of algorithms and problem complexity (68Q25) · Stochastic programming (90C15)
Cites Work
- Title not available
- A Stochastic Approximation Method
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- A Model Reference Adaptive Search Method for Global Optimization
- Stochastic Estimation of the Maximum of a Regression Function
- A class of attractions/repulsion functions for stable swarm aggregations
- Design and analysis of simulation experiments
- Swarm stability and optimization
- Adaptive Penalty-Based Distributed Stochastic Convex Optimization
- Noise Reduction by Swarming in Social Foraging
- A Flocking-Based Approach for Distributed Stochastic Optimization
Cited In (8)
- Erratum: Swarming for Faster Convergence in Stochastic Optimization
- Title not available
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- Distributed stochastic gradient tracking methods
- A swarm-inspired projection algorithm
- Learning to Optimize
- Decentralized coordination of autonomous swarms using parallel Gibbs sampling
- Safe Metropolis-Hastings algorithm and its application to swarm control