Augmented \(\ell_1\) and nuclear-norm models with a globally linearly convergent algorithm
Abstract: This paper studies the long-existing idea of adding a nice smooth function to "smooth" a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of \(\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\), where \(x\) is a vector, as well as the minimization of \(\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2\), where \(X\) is a matrix and \(\|X\|_*\) and \(\|X\|_F\) are the nuclear and Frobenius norms of \(X\), respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing \(\|x\|_1\) and \(\|X\|_*\) under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector \(x^0\), minimizing \(\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\) returns (nearly) the same solution as minimizing \(\|x\|_1\) almost whenever \(\alpha \geq 10\|x^0\|_\infty\). The same relation also holds between minimizing \(\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2\) and minimizing \(\|X\|_*\) for recovering a (nearly) low-rank matrix \(X^0\), if \(\alpha \geq 10\|X^0\|_2\). Furthermore, we show that the linearized Bregman algorithm for minimizing \(\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\) subject to \(Ax = b\) enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties on \(A\). To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
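For readers who want to experiment, below is a minimal sketch of the linearized Bregman iteration discussed in the abstract, written as gradient ascent on the dual problem (a standard formulation of the method, not code from the paper). The function name, step-size rule, and synthetic test data are illustrative assumptions.

```python
import numpy as np

def linearized_bregman(A, b, alpha, tau, iters=5000):
    """Sketch of the linearized Bregman iteration for
        min ||x||_1 + ||x||_2^2 / (2*alpha)  subject to  A x = b,
    run as gradient ascent on the dual. The step-size rule
    tau < 2 / (alpha * ||A||_2^2) is a standard safe choice,
    assumed here rather than taken from the paper.
    """
    x = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])             # v^k = A^T y^k, with y^k the dual variable
    for _ in range(iters):
        v += tau * (A.T @ (b - A @ x))   # dual gradient (residual) step
        # soft-thresholding: x^k = alpha * shrink(v^k, 1)
        x = alpha * np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)
    return x

# Illustrative usage on a synthetic sparse-recovery instance:
rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0
alpha = 10 * np.abs(x0).max()            # the abstract's alpha >= 10*||x0||_inf regime
tau = 1.9 / (alpha * np.linalg.norm(A, 2) ** 2)
x = linearized_bregman(A, b, alpha, tau)
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

The iterates converge to the solution of the augmented \(\ell_1\) model, which for \(\alpha\) in the regime above (nearly) coincides with the \(\ell_1\) solution; the paper's contribution is that this convergence is globally linear, with an explicit rate, whenever a nonzero solution exists.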
Recommendations
- Truncated \(\ell_{1-2}\) Models for Sparse Recovery and Rank Minimization
- On first-order algorithms for \(\ell_1\)/nuclear norm minimization
- An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
- Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
- Fixed-Point Continuation for \(\ell_1\)-Minimization: Methodology and Convergence
Cited in (36)
- A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Proximal linearization methods for Schatten \(p\)-quasi-norm minimization
- Local linear convergence of a primal-dual algorithm for the augmented convex models
- Low-rank matrix recovery via regularized nuclear norm minimization
- Iterative methods based on soft thresholding of hierarchical tensors
- Variance reduction for root-finding problems
- Optimality conditions and numerical algorithms for a class of linearly constrained minimax optimization problems
- Cardinality minimization, constraints, and regularization: a survey
- Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
- Sparse recovery via differential inclusions
- A time continuation based fast approximate algorithm for compressed sensing related optimization
- On first-order algorithms for \(\ell_1\)/nuclear norm minimization
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications
- Stability of the elastic net estimator
- Asynchronous stochastic coordinate descent: parallelism and convergence properties
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Linear convergence of the randomized sparse Kaczmarz method
- Sparse + low-energy decomposition for viscous conservation laws
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Revisiting linearized Bregman iterations under Lipschitz-like convexity condition
- Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit
- Acceleration and restart for the randomized Bregman-Kaczmarz method
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- Regularized Kaczmarz Algorithms for Tensor Recovery
- An improved algorithm for basis pursuit problem and its applications
- On the convergence of asynchronous parallel iteration with unbounded delays
- On the convergence of decentralized gradient descent
- Sparse sampling Kaczmarz–Motzkin method with linear convergence
- Projected shrinkage algorithm for box-constrained \(\ell_1\)-minimization
- A flexible ADMM algorithm for big data applications
- Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
- Linear convergence of descent methods for the unconstrained minimization of restricted strongly convex functions
- Redundancy techniques for straggler mitigation in distributed optimization and learning