Distributed optimization with arbitrary local solvers
Publication: 4594835
Abstract: With the growth of data and the need for distributed optimization methods, solvers that work well on a single machine must be redesigned to leverage distributed computation. Recent work in this area has focused heavily on developing highly specialized methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single-machine counterparts, and they cannot easily integrate the improvements that continue to be made to single-machine methods. To this end, we present a framework for distributed optimization that allows arbitrary solvers to be used locally on each machine while maintaining competitive performance against state-of-the-art special-purpose distributed methods. We give strong primal-dual convergence rate guarantees for our framework that hold for arbitrary local solvers. We demonstrate the impact of local solver selection both theoretically and in an extensive experimental comparison. Finally, we provide thorough implementation details for our framework, highlighting areas for practical performance gains.
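The general pattern the abstract describes, each machine applying an arbitrary local solver to its data partition, with a coordinator aggregating the local results each round, can be illustrated with a minimal sketch. This is not the paper's actual algorithm or its primal-dual subproblems; it is a hypothetical simulation of the plug-in-solver idea on a least-squares problem, where the local solver (here a few gradient-descent steps) is an interchangeable function argument:

```python
import numpy as np

def local_gd(A_k, b_k, x, steps=5):
    # A pluggable "local solver": a few gradient-descent steps on this
    # machine's share of the least-squares objective ||A_k x - b_k||^2.
    # Any other solver with the same signature could be swapped in.
    lr = 1.0 / (np.linalg.norm(A_k, 2) ** 2 + 1e-12)
    for _ in range(steps):
        x = x - lr * A_k.T @ (A_k @ x - b_k)
    return x

def distributed_solve(A, b, n_machines=4, rounds=50, local_solver=local_gd):
    # Split rows across simulated machines; each round, every machine runs
    # the (arbitrary) local solver from the shared iterate, and the
    # coordinator averages the local iterates into the next shared iterate.
    parts = np.array_split(np.arange(A.shape[0]), n_machines)
    x = np.zeros(A.shape[1])
    for _ in range(rounds):
        x = np.mean([local_solver(A[p], b[p], x) for p in parts], axis=0)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true  # noiseless system, so x_true is every machine's minimizer
x_hat = distributed_solve(A, b)
print(np.linalg.norm(x_hat - x_true))  # residual shrinks with more rounds
```

Because the local solver is just a function argument, improvements to single-machine methods drop in without changing the communication pattern, which is the flexibility the framework is arguing for.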
Recommendations
- scientific article; zbMATH DE number 6982986
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Optimal convergence rates for convex distributed optimization in networks
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Block splitting for distributed optimization
Cited in (18)
- An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
- Adaptivity of stochastic gradient methods for nonconvex optimization
- scientific article; zbMATH DE number 7306895
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- Distributed localized bi-objective search
- A distributed flexible delay-tolerant proximal gradient algorithm
- Communication-Aware Local Search for Distributed Constraint Optimization
- Local models-an approach to distributed multi-objective optimization
- Stochastic distributed learning with gradient quantization and double-variance reduction
- ADD-OPT: Accelerated Distributed Directed Optimization
- Improved asynchronous parallel optimization analysis for stochastic incremental methods
- Distributed optimization for degenerate loss functions arising from over-parameterization
- Hierarchical distributed optimization of constraint-coupled convex and mixed-integer programs using approximations of the dual function
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Distributed Optimization With Local Domains: Applications in MPC and Network Flows
- scientific article; zbMATH DE number 6982986
- Optimal data splitting in distributed optimization for machine learning