Parallel Selective Algorithms for Nonconvex Big Data Optimization
Publication: 4580494
DOI: 10.1109/TSP.2015.2399858 · zbMATH Open: 1394.94174 · arXiv: 1402.5521 · OpenAlex: W2058361915 · MaRDI QID: Q4580494 · FDO: Q4580494
Gesualdo Scutari, Francisco Facchinei, Simone Sagratella
Publication date: 22 August 2018
Published in: IEEE Transactions on Signal Processing
Abstract: We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between", with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
Full work available at URL: https://arxiv.org/abs/1402.5521
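The abstract's key mechanism, updating only a subset of (block) variables per iteration by minimizing a convex local approximation of the objective, can be illustrated on LASSO, one of the paper's test problems. The following is a minimal Python sketch, not the authors' implementation: each iteration takes a proximal-gradient (soft-thresholding) step on a randomly chosen subset of coordinates and leaves the rest unchanged. All names and parameters (`frac`, the step size 1/L) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def selective_jacobi_lasso(A, b, lam, n_iter=300, frac=0.5, seed=0):
    """Sketch of a selective (subset-at-a-time) scheme for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.  Per iteration, a random
    subset of coordinates takes a proximal-gradient step (vectorized
    here in place of true parallel workers); the rest stay fixed."""
    rng = np.random.default_rng(seed)
    _, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        S = rng.random(n) < frac             # coordinates selected this round
        x_candidate = soft_threshold(x - step * grad, step * lam)
        x[S] = x_candidate[S]                # update only the selected subset
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = selective_jacobi_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

In a genuine parallel implementation each selected block would be assigned to its own worker and could minimize a richer strongly convex surrogate than the simple linearization used here; the vectorized subset update above only sketches the Jacobi-style structure the abstract describes.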
Mathematics Subject Classification:
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
- Nonconvex programming, global optimization (90C26)
Cited in (25 documents; 22 titles listed):
- On the solution of monotone nested variational inequalities
- Parallel coordinate descent methods for big data optimization
- Combining approximation and exact penalty in hierarchical programming
- A framework for parallel and distributed training of neural networks
- Distributed nonconvex constrained optimization over time-varying digraphs
- Distributed algorithms for convex problems with linear coupling constraints
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- A fast active set block coordinate descent algorithm for ℓ₁-regularized least squares
- Decentralized Dictionary Learning Over Time-Varying Digraphs
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
- A flexible coordinate descent method
- Asynchronous parallel algorithms for nonconvex optimization
- Localization and approximations for distributed non-convex optimization
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
- Distributed semi-supervised support vector machines
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- An adaptive partial linearization method for optimization problems on product sets
- Feasible methods for nonconvex nonsmooth problems with applications in green communications
- Distributed optimization methods for nonconvex problems with inequality constraints over time-varying networks
- Computing B-Stationary Points of Nonsmooth DC Programs
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems