Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
From MaRDI portal
Publication:4580706
Cited in (11):
- Parallel coordinate descent methods for big data optimization
- \(\mathrm{L_1RIP}\)-based robust compressed sensing
- Newton-like method with diagonal correction for distributed optimization
- A fast active set block coordinate descent algorithm for \(\ell_1\)-regularized least squares
- A flexible coordinate descent method
- Asynchronous parallel algorithms for nonconvex optimization
- A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing
- Decentralized dictionary learning over time-varying digraphs
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems