Swarming for Faster Convergence in Stochastic Optimization

From MaRDI portal
Publication: 4582016

DOI: 10.1137/17M1111085
zbMATH Open: 1401.90138
arXiv: 1806.04207
OpenAlex: W2964263318
MaRDI QID: Q4582016
FDO: Q4582016

Alfredo García, Shi Pu

Publication date: 22 August 2018

Published in: SIAM Journal on Control and Optimization

Abstract: We study a distributed framework for stochastic optimization, inspired by models of collective motion found in nature (e.g., swarming), that has mild communication requirements. Specifically, we analyze a scheme in which each of N > 1 independent threads implements, in a distributed and unsynchronized fashion, a stochastic gradient descent algorithm perturbed by a swarming potential. Assuming the overhead caused by synchronization is not negligible, we show that the swarming-based approach exhibits better (real-time) convergence speed than a centralized algorithm based upon the average of N observations. We also derive an error bound that is monotone decreasing in network size and connectivity. We characterize the scheme's finite-time performance for both convex and non-convex objective functions.
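
The record contains no code; the following is a minimal, illustrative Python sketch of the kind of scheme the abstract describes: each of N threads takes noisy gradient steps and is additionally attracted toward its network neighbors' iterates. The function name swarming_sgd, the linear-in-differences attraction term, the single-thread asynchronous updates, and all parameter values are assumptions made for illustration, not the authors' implementation.

    import numpy as np

    def swarming_sgd(grad_oracle, x_init, adjacency, steps=1000, lr=0.01, coupling=0.1):
        """Run N unsynchronized SGD threads coupled by a swarming (attraction) term.

        grad_oracle(i, x_i) should return a noisy gradient sample for thread i at x_i.
        adjacency is an N-by-N matrix of (nonnegative) network weights.
        """
        x = np.array(x_init, dtype=float)      # x[i] is thread i's current iterate
        N = x.shape[0]
        rng = np.random.default_rng()
        for _ in range(steps):
            i = rng.integers(N)                # one thread updates at a time: no synchronization
            # Swarming perturbation: pull iterate i toward its neighbors' iterates
            attract = sum(adjacency[i, j] * (x[j] - x[i]) for j in range(N))
            x[i] = x[i] - lr * grad_oracle(i, x[i]) + lr * coupling * attract
        return x.mean(axis=0)                  # report the swarm average as the estimate

    # Example (illustrative): noisy gradients of f(x) = ||x||^2 / 2, 4 threads, complete graph
    rng = np.random.default_rng(0)
    oracle = lambda i, xi: xi + 0.5 * rng.standard_normal(xi.shape)
    A = np.ones((4, 4)) - np.eye(4)
    print(swarming_sgd(oracle, rng.standard_normal((4, 2)), A, steps=5000, lr=0.005))

In this sketch the coupling strength and the final averaging step are design choices of the illustration; the network enters only through the adjacency matrix, which is where the paper's dependence of the error bound on network size and connectivity would show up.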


Full work available at URL: https://arxiv.org/abs/1806.04207





Cites Work


Cited In (6)

Uses Software






