An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization

From MaRDI portal
Publication:2979326

DOI: 10.1109/TAC.2016.2525015 · zbMATH Open: 1359.90080 · arXiv: 1505.04824 · OpenAlex: W2949585412 · MaRDI QID: Q2979326


Authors: Hamid Reza Feyzmahdavian, Arda Aytekin, Mikael Johansson


Publication date: 3 May 2017

Published in: IEEE Transactions on Automatic Control

Abstract: Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, state-of-the-art parallel mini-batch algorithms assume synchronous operation or cyclic update orders. When worker nodes are heterogeneous (due to different computational capabilities or different communication delays), synchronous and cyclic operations are inefficient, since they leave workers idle waiting for slower nodes to complete their computations. In this paper, we propose an asynchronous mini-batch algorithm for regularized stochastic optimization problems with smooth loss functions that eliminates idle waiting and allows workers to run at their maximal update rates. We show that by suitably choosing the step-size values, the algorithm achieves a rate of the order O(1/√T) for general convex regularization functions, and the rate O(1/T) for strongly convex regularization functions, where T is the number of iterations. In both cases, the impact of asynchrony on the convergence rate of our algorithm is asymptotically negligible, and a near-linear speedup in the number of workers can be expected. Theoretical results are confirmed in real implementations on a distributed computing infrastructure.
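The update scheme the abstract describes can be illustrated with a minimal serial simulation: a "worker" computes a mini-batch gradient at a possibly stale iterate (staleness modeling asynchrony), and the master applies a proximal step with a diminishing O(1/√t) step size, as in the general convex case. This is a hedged sketch, not the authors' implementation; the problem instance (ℓ1-regularized least squares), the step-size constant, the delay model, and all function names are illustrative assumptions.

```python
import random

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def grad_minibatch(x, batch):
    # Averaged stochastic gradient of the smooth loss 0.5*(a.x - b)^2
    # over a mini-batch of (a, b) samples.
    d = len(x)
    g = [0.0] * d
    for a, b in batch:
        r = sum(ai * xi for ai, xi in zip(a, x)) - b
        for j in range(d):
            g[j] += r * a[j] / len(batch)
    return g

def async_prox_sgd(data, d, lam, T, batch_size=4, max_delay=3, seed=0):
    # Simulated asynchronous mini-batch proximal gradient with bounded delay:
    # each update uses a gradient evaluated at an iterate up to `max_delay`
    # steps old, mimicking a worker that read the parameters earlier.
    rng = random.Random(seed)
    x = [0.0] * d
    history = [x]  # past iterates; staleness is drawn from this buffer
    for t in range(T):
        tau = rng.randint(0, min(max_delay, len(history) - 1))
        x_stale = history[-1 - tau]  # possibly stale read, delay tau
        batch = [rng.choice(data) for _ in range(batch_size)]
        g = grad_minibatch(x_stale, batch)
        step = 0.5 / (1 + t) ** 0.5  # O(1/sqrt(t)) step size (convex case)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)],
                           step * lam)
        history.append(x)
    return x
```

Despite the stale gradients, the iterates approach the regularized optimum, consistent with the paper's claim that bounded asynchrony has an asymptotically negligible effect on the convergence rate.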


Full work available at URL: https://arxiv.org/abs/1505.04824




Cited In (21)





