Improved iteratively reweighted least squares for unconstrained smoothed \(\ell_q\) minimization
DOI: 10.1137/110840364 · zbMATH Open: 1268.49038 · OpenAlex: W2031906930 · MaRDI QID: Q2840384 · FDO: Q2840384
Authors: Ming-Jun Lai, Wotao Yin, Yangyang Xu
Publication date: 18 July 2013
Published in: SIAM Journal on Numerical Analysis
Full work available at URL: https://doi.org/10.1137/110840364
Recommendations
- Iteratively reweighted least squares minimization for sparse recovery
- Sparse recovery by the iteratively reweighted \(\ell_1\) algorithm for elastic \(\ell_2-\ell_q\) minimization
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Iterative re-weighted least squares algorithm for \(l_p\)-minimization with tight frame and \(0 < p \leq 1\)
- Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery
Keywords: iteratively reweighted least squares; recovery of low-rank matrices; recovery of sparse vectors; unconstrained \(\ell_q\) minimization
MSC classifications: Numerical optimization and variational techniques (65K10); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Numerical methods based on necessary conditions (49M05); Acceleration of convergence in numerical analysis (65B99)
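The paper concerns iteratively reweighted least squares (IRLS) for unconstrained smoothed \(\ell_q\) minimization \((0 < q < 1)\). Below is a minimal illustrative sketch of a generic IRLS scheme of this kind, not the authors' exact algorithm: it minimizes \(\lambda \sum_i (x_i^2 + \epsilon^2)^{q/2} + \tfrac12\|Ax-b\|_2^2\) by repeatedly solving a weighted least-squares system, shrinking the smoothing parameter \(\epsilon\) across iterations (a common continuation strategy). All names and parameter values here are assumptions chosen for the demo.

```python
import numpy as np

def irls_lq(A, b, q=0.5, lam=1e-4, eps=1.0, n_iter=100):
    """Illustrative IRLS sketch (not the paper's exact method) for
    min_x  lam * sum_i (x_i^2 + eps^2)^(q/2) + 0.5 * ||A x - b||^2.
    Each step freezes the weights w_i = (x_i^2 + eps^2)^(q/2 - 1)
    and solves the resulting linear (weighted least-squares) system."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares initializer
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = (x**2 + eps**2) ** (q / 2 - 1)            # reweighting step
        x = np.linalg.solve(AtA + lam * q * np.diag(w), Atb)
        eps = max(eps * 0.9, 1e-8)                    # smoothing continuation
    return x

# Small demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = irls_lq(A, b, q=0.5)
```

The continuation on \(\epsilon\) matters: starting with a large smoothing parameter keeps early weights nearly uniform (close to ordinary least squares) and gradually sharpens the sparsity-promoting penalty, which is what makes nonconvex \(\ell_q\) IRLS schemes behave robustly in practice.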
Cited in (showing first 100 items)
- Newton method for \(\ell_0\)-regularized optimization
- Approximate versions of proximal iteratively reweighted algorithms including an extended IP-ICMM for signal and image processing problems
- The \(\ell_{2,p}\) regularized total variation with overlapping group sparsity prior for image restoration with impulse noise
- Gradient projection Newton pursuit for sparsity constrained optimization
- A smoothing proximal gradient algorithm for matrix rank minimization problem
- A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm
- On a general smoothly truncated regularization for variational piecewise constant image restoration: construction and convergent algorithms
- On the Schatten \(p\)-quasi-norm minimization for low-rank matrix recovery
- Proximal linearization methods for Schatten \(p\)-quasi-norm minimization
- Convergence and stability analysis of iteratively reweighted least squares for noisy block sparse recovery
- Numerical identification of a sparse Robin coefficient
- Iterative reweighted methods for \(\ell _1-\ell _p\) minimization
- DC approximation approach for \(\ell_0\)-minimization in compressed sensing
- Sparse signal recovery by accelerated \(\ell_q\) \((0<q<1)\) thresholding algorithm
- Equivalent Lipschitz surrogates for zero-norm and rank optimization problems
- Non-Lipschitz models for image restoration with impulse noise removal
- Weighted \(l_p- l_1\) minimization methods for block sparse recovery and rank minimization
- A null-space-based weighted \(l_1\) minimization approach to compressed sensing
- Effective two-stage image segmentation: a new non-Lipschitz decomposition approach with convergent algorithm
- A general framework of rotational sparse approximation in uncertainty quantification
- Iterative re-weighted least squares algorithm for \(l_p\)-minimization with tight frame and \(0 < p \leq 1\)
- Robust recovery of signals with partially known support information using weighted BPDN
- Smoothing inertial neurodynamic approach for sparse signal reconstruction via \(L_p\)-norm minimization
- Low-rank matrix recovery via regularized nuclear norm minimization
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Sparse Solutions by a Quadratically Constrained \(\ell_q\) \((0 < q < 1)\) Minimization Model
- Recovery of seismic wavefields by an \(l_{q}\)-norm constrained regularization method
- Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization
- An interior stochastic gradient method for a class of non-Lipschitz optimization problems
- Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing
- A Scale-Invariant Approach for Sparse Signal Recovery
- Multi-competitive viruses over time-varying networks with mutations and human awareness
- \(\ell_1-\alpha\ell_2\) minimization methods for signal and image reconstruction with impulsive noise removal
- The springback penalty for robust signal recovery
- A novel dictionary learning method based on total least squares approach with application in high dimensional biological data
- Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods
- Online Schatten quasi-norm minimization for robust principal component analysis
- An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing
- A new method based on the manifold-alternative approximating for low-rank matrix completion
- Normal Cones Intersection Rule and Optimality Analysis for Low-Rank Matrix Optimization with Affine Manifolds
- Several classes of stationary points for rank regularized minimization problems
- Image retinex based on the nonconvex TV-type regularization
- A nonconvex approach to low-rank matrix completion using convex optimization
- A nonconvex truncated regularization and box-constrained model for CT reconstruction
- Minimization of the difference of nuclear and Frobenius norms for noisy low rank matrix recovery
- RIP-based performance guarantee for low-tubal-rank tensor recovery
- A new hybrid \(l_p\)-\(l_2\) model for sparse solutions with applications to image processing
- Smoothing strategy along with conjugate gradient algorithm for signal reconstruction
- Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing
- Sparse approximation using \(\ell_1-\ell_2\) minimization and its application to stochastic collocation
- A globally convergent algorithm for a class of gradient compounded non-Lipschitz models applied to non-additive noise removal
- Error bounds for rank constrained optimization problems and applications
- Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery
- Parallel matrix factorization for low-rank tensor completion
- An alternating direction method with continuation for nonconvex low rank minimization
- A new globally convergent algorithm for non-Lipschitz \(\ell_{p}-\ell_q\) minimization
- Enhancing matrix completion using a modified second-order total variation
- Exact minimum rank approximation via Schatten \(p\)-norm minimization
- Two-stage convex relaxation approach to low-rank and sparsity regularized least squares loss
- Entropy function-based algorithms for solving a class of nonconvex minimization problems
- Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems
- Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery
- Isotropic non-Lipschitz regularization for sparse representations of random fields on the sphere
- A non-convex algorithm framework based on DC programming and DCA for matrix completion
- A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
- A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem
- On monotone and primal-dual active set schemes for \(\ell^p\)-type problems, \(p \in (0,1]\)
- Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems
- A singular value \(p\)-shrinkage thresholding algorithm for low rank matrix recovery
- Nonconvex sorted \(\ell_1\) minimization for sparse approximation
- Signal recovery under cumulative coherence
- Fast L1-L2 minimization via a proximal operator
- Sparse signal recovery with prior information by iterative reweighted least squares algorithm
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- \(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems
- Modulus-based iterative methods for constrained \(\ell_p\)-\(\ell_q\) minimization
- Robust sparse recovery via a novel convex model
- A Regularized Newton Method for \(\ell_q\)-Norm Composite Optimization Problems
- An efficient non-convex total variation approach for image deblurring and denoising
- Noisy matrix completion: understanding statistical guarantees for convex relaxation via nonconvex optimization
- A nonmonotone alternating updating method for a class of matrix factorization problems
- A Barzilai-Borwein-like iterative half thresholding algorithm for the \(L_{1/2}\) regularized problem
- Multistage convex relaxation approach to rank regularized minimization problems based on equivalent mathematical program with a generalized complementarity constraint
- Efficient regularized regression with \(L_0\) penalty for variable selection and network construction
- Low-rank factorization for rank minimization with nonconvex regularizers
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- An unconstrained \(\ell_q\) minimization with \(0<q\leq 1\) for sparse solution of underdetermined linear systems
- The Dantzig selector: recovery of signal via \(\ell_1 - \alpha\ell_2\) minimization
- A rank-corrected procedure for matrix completion with fixed basis coefficients
- Conjugate gradient acceleration of iteratively re-weighted least squares methods
- Global optimality condition and fixed point continuation algorithm for non-Lipschitz \(\ell_p\) regularized matrix minimization
- A reweighted nuclear norm minimization algorithm for low rank matrix recovery
- Smoothing neural network for \(L_0\) regularized optimization problem with general convex constraints
- A joint matrix minimization approach for multi-image face recognition
- Truncated \(l_{1-2}\) Models for Sparse Recovery and Rank Minimization
- A gradient descent based algorithm for \(\ell_p\) minimization
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Point source super-resolution via non-convex \(L_1\) based methods
- Computational Aspects of Constrained \(L_1\)-\(L_2\) Minimization for Compressive Sensing
- Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\)