Distributed optimization with arbitrary local solvers
Publication: 4594835
DOI: 10.1080/10556788.2016.1278445
zbMath: 1419.68214
arXiv: 1512.04039
OpenAlex: W2199097987
MaRDI QID: Q4594835
Virginia Smith, Peter Richtárik, Martin Jaggi, Chenxin Ma, Martin Takáč, Jakub Konečný, Michael I. Jordan
Publication date: 24 November 2017
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1512.04039
Mathematics Subject Classification: Analysis of algorithms (68W40); Learning and adaptive systems in artificial intelligence (68T05); Parallel algorithms in computer science (68W10); Randomized algorithms (68W20); Distributed algorithms (68W15)
Related Items:
Stochastic distributed learning with gradient quantization and double-variance reduction
An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Improved asynchronous parallel optimization analysis for stochastic incremental methods
A Distributed Flexible Delay-Tolerant Proximal Gradient Algorithm
Distributed optimization for degenerate loss functions arising from over-parameterization
An accelerated communication-efficient primal-dual optimization framework for structured machine learning