Parallel and distributed successive convex approximation methods for big-data optimization
From MaRDI portal
Publication: 2312982
DOI: 10.1007/978-3-319-97142-1_3
zbMATH Open: 1461.90101
arXiv: 1805.06963
OpenAlex: W2803565980
MaRDI QID: Q2312982
FDO: Q2312982
Authors: Gesualdo Scutari, Ying Sun
Publication date: 18 July 2019
Abstract: Recent years have witnessed a surge of interest in parallel and distributed optimization methods for large-scale systems. In particular, nonconvex large-scale optimization problems have found a wide range of applications in several engineering fields. The design and analysis of such complex, large-scale systems pose several challenges and call for the development of new optimization models and algorithms. The major contribution of this paper is to put forth a general, unified algorithmic framework, based on Successive Convex Approximation (SCA) techniques, for the parallel and distributed solution of a general class of nonconvex constrained (nonseparable, networked) problems. The presented framework unifies and generalizes several existing SCA methods, making them appealing for a parallel/distributed implementation while offering a flexible selection of function approximants, step-size schedules, and control of the computation/communication efficiency. This paper is organized according to the lectures that one of the authors delivered at the CIME Summer School on Centralized and Distributed Multi-agent Optimization Models and Algorithms, held in Cetraro, Italy, June 23--27, 2014. These lectures are: I) Successive Convex Approximation Methods: Basics; II) Parallel Successive Convex Approximation Methods; and III) Distributed Successive Convex Approximation Methods.
Full work available at URL: https://arxiv.org/abs/1805.06963
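The core SCA idea described in the abstract — replace the nonconvex objective at each iterate with a strongly convex surrogate, solve the surrogate, and take a diminishing step toward its minimizer — can be sketched in a few lines. The quadratic (linearization-plus-proximal) surrogate, the step-size rule, and the one-dimensional test function below are illustrative assumptions for exposition, not the specific approximants or problems studied in the paper.

```python
import numpy as np

def sca_minimize(grad_f, x0, tau=10.0, steps=200):
    """Minimal SCA sketch. At iterate x_k we use the strongly convex surrogate
        f~(x; x_k) = f(x_k) + grad_f(x_k)^T (x - x_k) + (tau/2) ||x - x_k||^2,
    whose minimizer is x_hat = x_k - grad_f(x_k) / tau, and then move toward
    x_hat with a diminishing step size gamma_k (here gamma_k = 2/(k+2))."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        x_hat = x - grad_f(x) / tau   # closed-form minimizer of the surrogate
        gamma = 2.0 / (k + 2)         # diminishing step size, sum diverges
        x = x + gamma * (x_hat - x)   # convex combination of iterate and x_hat
    return x

# Illustrative nonconvex test function f(x) = x^4 - 2 x^2 + x,
# which has two local minima; the global one is near x = -1.107.
f = lambda x: x**4 - 2 * x**2 + x
grad = lambda x: 4 * x**3 - 4 * x + 1

x_star = sca_minimize(grad, x0=np.array([2.0]))
```

With richer surrogates (e.g., keeping a convex part of the objective exact and linearizing only the nonconvex part, or decomposing across agents of a network), the same iterate-surrogate-step template yields the parallel and distributed schemes the paper unifies.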
Recommendations
- Asynchronous parallel algorithms for nonconvex optimization
- A parallelizable augmented Lagrangian method applied to large-scale non-convex-constrained optimization problems
- Parallel coordinate descent methods for big data optimization
- Distributed nonconvex constrained optimization over time-varying digraphs
Mathematics Subject Classification
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Approximation methods and heuristics in mathematical programming (90C59)
- Parallel algorithms in computer science (68W10)
Cited In (8)
- Parallel coordinate descent methods for big data optimization
- Distributed stochastic nonsmooth nonconvex optimization
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- A majorization-minimization algorithm for optimal sensor location in distributed parameter systems
- Decentralized dictionary learning over time-varying digraphs
- Second-order guarantees of distributed gradient algorithms
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Convergence analysis of block majorize-minimize subspace approach