Parallel Selective Algorithms for Nonconvex Big Data Optimization
From MaRDI portal
Publication:4580494
Abstract: We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block-)separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between", with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
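To make the abstract concrete, the sketch below illustrates the general idea on LASSO (smooth least-squares term plus an \(\ell_1\) regularizer): at each iteration only a randomly selected subset of variables is updated, in parallel, via a proximal-gradient (soft-thresholding) step. This is a minimal illustrative sketch of a selective block update of the kind the framework covers, not the paper's actual algorithm; the function names, the uniform-random block selection, and the conservative global stepsize `1/L` are assumptions made here for simplicity.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def selective_prox_gradient_lasso(A, b, lam, n_iter=500, frac=0.5, seed=0):
    """Illustrative selective parallel update for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Each iteration updates only a random subset of coordinates
    (Jacobi-style: all selected coordinates move simultaneously).
    frac=1.0 recovers a fully parallel proximal-gradient step.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    # Global Lipschitz constant of the smooth part's gradient;
    # the stepsize 1/L guarantees monotone descent of the objective.
    L = np.linalg.norm(A, 2) ** 2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        k = max(1, int(frac * n))
        S = rng.choice(n, size=k, replace=False)  # variables updated this round
        x[S] = soft_threshold(x[S] - step * grad[S], step * lam)
    return x
```

Because the stepsize is tied to the global Lipschitz constant, each selective update cannot increase the objective, so the iterates' objective values decrease monotonically regardless of which subset is chosen.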
Cited in (25)
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- A class of parallel doubly stochastic algorithms for large-scale learning
- A framework for parallel and distributed training of neural networks
- Computing B-stationary points of nonsmooth DC programs
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- Distributed nonconvex constrained optimization over time-varying digraphs
- An adaptive partial linearization method for optimization problems on product sets
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems
- A fast active set block coordinate descent algorithm for \(\ell_1\)-regularized least squares
- Distributed semi-supervised support vector machines
- Asynchronous stochastic coordinate descent: parallelism and convergence properties
- Decentralized dictionary learning over time-varying digraphs
- Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
- On the solution of monotone nested variational inequalities
- A distributed block coordinate descent method for training \(l_1\) regularized linear classifiers
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- Parallel coordinate descent methods for big data optimization
- A flexible coordinate descent method
- Distributed algorithms for convex problems with linear coupling constraints
- scientific article; zbMATH DE number 7626711
- Localization and approximations for distributed non-convex optimization
- Asynchronous parallel algorithms for nonconvex optimization
- Combining approximation and exact penalty in hierarchical programming
- Feasible methods for nonconvex nonsmooth problems with applications in green communications