Minimization of $\ell_{1-2}$ for Compressed Sensing

From MaRDI portal


DOI: 10.1137/140952363 · zbMath: 1316.90037 · MaRDI QID: Q5251929

Penghang Yin, Yifei Lou, Qi He, Jack X. Xin

Publication date: 21 May 2015

Published in: SIAM Journal on Scientific Computing

Full work available at URL: https://doi.org/10.1137/140952363


90C26: Nonconvex programming, global optimization

49M29: Numerical methods involving duality

65K10: Numerical optimization and variational techniques
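The paper's subject is minimization of the nonconvex penalty \(\|x\|_1 - \|x\|_2\) for sparse recovery, handled via a difference-of-convex algorithm (DCA): each outer step linearizes the concave part \(-\|x\|_2\) at the current iterate and solves the resulting convex \(\ell_1\) subproblem. The following is a minimal sketch of that scheme for an unconstrained least-squares data term, with the inner subproblem solved by proximal gradient (ISTA); the function name, parameters, and problem setup are illustrative, not taken from the paper.

```python
import numpy as np

def l1_minus_l2_dca(A, b, lam=1e-3, outer=10, inner=300):
    """Sketch of a DCA for  min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2).

    Each outer iteration linearizes the concave term -lam*||x||_2 at the
    current iterate (via a subgradient y = x/||x||_2) and solves the convex
    subproblem  min_x 0.5*||Ax - b||^2 - lam*<y, x> + lam*||x||_1
    by proximal gradient descent with soft-thresholding.
    """
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
    t = 1.0 / L                     # fixed step size
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        # Subgradient of ||x||_2 (zero vector is a valid choice at x = 0).
        y = x / nrm if nrm > 0 else np.zeros(n)
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - lam * y
            z = x - t * grad
            # Soft-thresholding: prox of t*lam*||.||_1.
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    return x

if __name__ == "__main__":
    # Toy compressed-sensing instance: 5-sparse signal, 40 Gaussian measurements.
    np.random.seed(0)
    m, n, k = 40, 100, 5
    A = np.random.randn(m, n) / np.sqrt(m)
    x_true = np.zeros(n)
    idx = np.random.choice(n, k, replace=False)
    x_true[idx] = np.random.choice([-1.0, 1.0], k)
    b = A @ x_true
    x_hat = l1_minus_l2_dca(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The first outer iteration (where the linearization vanishes at \(x = 0\)) reduces to a plain LASSO solve, so the DCA can be viewed as a reweighting on top of the \(\ell_1\) solution; the paper itself also treats the constrained formulation, which this sketch omits.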


Related Items

Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications, Block sparse signal recovery via minimizing the block \(q\)-ratio sparsity, Block-sparse recovery and rank minimization using a weighted \(l_p-l_q\) model, Sparse signal reconstruction via collaborative neurodynamic optimization, Accelerated sparse recovery via gradient descent with nonlinear conjugate gradient momentum, Sorted \(L_1/L_2\) minimization for sparse signal recovery, A wonderful triangle in compressed sensing, Huberization image restoration model from incomplete multiplicative noisy data, Convergence rate analysis of an extrapolated proximal difference-of-convex algorithm, Structured model selection via \(\ell_1-\ell_2\) optimization, \(L_1-\beta L_q\) Minimization for Signal and Image Recovery, Nonconvex \(\ell_p-\alpha\ell_q\) minimization method and \(p\)-RIP condition for stable recovery of approximately \(k\)-sparse signals, Inexact proximal DC Newton-type method for nonconvex composite functions, A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm, Enhanced total variation minimization for stable image reconstruction, A reduced half thresholding algorithm, A three-operator splitting algorithm with deviations for generalized DC programming, A variable metric and Nesterov extrapolated proximal DCA with backtracking for a composite DC program, \(k\)-sparse vector recovery via truncated \(\ell_1-\ell_2\) local minimization, Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems, Open issues and recent advances in DC programming and DCA, An iDCA with sieving strategy for PDE-constrained optimization problems with \(L^{1-2}\)-control cost, Composite Difference-Max Programs for Modern Statistical Estimation Problems, Compressed Sensing with Sparse Corruptions: Fault-Tolerant Sparse Collocation Approximations, Sliced-Inverse-Regression-Aided Rotated Compressive Sensing Method for Uncertainty Quantification, Decomposition Methods for Computing Directional Stationary Solutions of a Class of Nonsmooth Nonconvex Optimization Problems, On image restoration from random sampling noisy frequency data with regularization, Sparse Solutions by a Quadratically Constrained \(\ell_q\) (0 < q < 1) Minimization Model, Sparse Polynomial Chaos Expansions: Literature Survey and Benchmark, An image sharpening operator combined with framelet for image deblurring, \(\ell_1-\alpha\ell_2\) minimization methods for signal and image reconstruction with impulsive noise removal, A Three-Operator Splitting Algorithm for Nonconvex Sparsity Regularization, Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems, The Dantzig selector: recovery of signal via \(\ell_1-\alpha\ell_2\) minimization, A new sufficient condition for sparse vector recovery via \(\ell_1-\ell_2\) local minimization, A General Framework of Rotational Sparse Approximation in Uncertainty Quantification, Minimization of \(L_1\) Over \(L_2\) for Sparse Signal Recovery with Convergence Guarantee, Image Segmentation via Fischer-Burmeister Total Variation and Thresholding, An Iterative Reduction FISTA Algorithm for Large-Scale LASSO, Stable Image Reconstruction Using Transformed Total Variation Minimization, Smoothing techniques and difference of convex functions algorithms for image reconstructions, Robust recovery of signals with partially known support information using weighted BPDN, A projected gradient method for \(\alpha\ell_1-\beta\ell_2\) sparsity regularization, Non-convex Optimization via Strongly Convex Majorization-minimization, A Scale-Invariant Approach for Sparse Signal Recovery, \(\alpha\ell_1-\beta\ell_2\) regularization for sparse recovery, An Efficient Proximal Block Coordinate Homotopy Method for Large-Scale Sparse Least Squares Problems, Minimization of the difference of Nuclear and Frobenius norms for noisy low rank matrix recovery, BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights, Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity, Penalty Methods for a Class of Non-Lipschitz Optimization Problems, Algorithmic versatility of SPF-regularization methods, Weighted \(l_p-l_1\) minimization methods for block sparse recovery and rank minimization, New Restricted Isometry Property Analysis for \(\ell_1-\ell_2\) Minimization Methods, Limited-Angle CT Reconstruction via the \(L_1/L_2\) Minimization, A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford-Shah Color and Multiphase Image Segmentation, Robust signal recovery for \(\ell_{1-2}\) minimization via prior support information, Efficient Boosted DC Algorithm for Nonconvex Image Restoration with Rician Noise, An efficient semismooth Newton method for adaptive sparse signal recovery problems, Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines, Point source super-resolution via non-convex \(L_1\) based methods, Variational multiplicative noise removal by DC programming, Alternating direction method of multipliers with difference of convex functions, \(l_1\)-\(l_2\) regularization of split feasibility problems, Enhancing matrix completion using a modified second-order total variation, A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem, Fast L1-L2 minimization via a proximal operator, A proximal difference-of-convex algorithm with extrapolation, DC programming and DCA: thirty years of developments, Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing, Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing, A preconditioning approach for improved estimation of sparse polynomial chaos expansions, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Image restoration based on fractional-order model with decomposition: texture and cartoon, Perturbation analysis of \(L_{1-2}\) method for robust sparse recovery, A Lagrange-Newton algorithm for sparse nonlinear programming, On the grouping effect of the \(l_{1-2}\) models, The proximity operator of the log-sum penalty, Semi-supervised learning-assisted imaging method for electrical capacitance tomography, A unified Douglas-Rachford algorithm for generalized DC programming, Unconstrained \(\ell_1\)-\(\ell_2\) minimization for sparse recovery via mutual coherence, Low-rank matrix recovery with Ky Fan 2-\(k\)-norm, An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems, Nonconvex regularization for blurred images with Cauchy noise, Two sufficient descent three-term conjugate gradient methods for unconstrained optimization problems with applications in compressive sensing, Explicit lower bounds on \(|L (1, \chi)|\), Kurdyka-Łojasiewicz exponent via inf-projection, Gradient projection Newton pursuit for sparsity constrained optimization, The springback penalty for robust signal recovery, A solution approach for cardinality minimization problem based on fractional programming, A nonconvex \(l_1 (l_1-l_2)\) model for image restoration with impulse noise, Smoothing inertial projection neural network for minimization \(L_{p-q}\) in sparse signal reconstruction, Transformed \(\ell_1\) regularization for learning sparse deep neural networks, Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm, A new nonconvex approach for image restoration with Gamma noise, A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations, Relating \(\ell_p\) regularization and reweighted \(\ell_1\) regularization, RIP-based performance guarantee for low-tubal-rank tensor recovery, A hybrid Bregman alternating direction method of multipliers for the linearly constrained difference-of-convex problems, Divide and conquer: an incremental sparsity promoting compressive sampling approach for polynomial chaos expansions, The modified second APG method for DC optimization problems, Further properties of the forward-backward envelope with applications to difference-of-convex programming, Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing, A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection, A novel regularization based on the error function for sparse recovery, Newton method for \(\ell_0\)-regularized optimization, A proximal algorithm with backtracked extrapolation for a class of structured fractional programming, A necessary and sufficient condition for sparse vector recovery via \(\ell_1-\ell_2\) minimization, Robust signal recovery via \(\ell_{1-2}/\ell_p\) minimization with partially known support, Convergence rate analysis of proximal iteratively reweighted \(\ell_1\) methods for \(\ell_p\) regularization problems, Low rank matrix minimization with a truncated difference of nuclear norm and Frobenius norm regularization, Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints, Conic formulation of QPCCs applied to truly sparse QPs, Morozov's discrepancy principle for \(\alpha\ell_1-\beta\ell_2\) sparsity regularization, A DC Programming Approach to the Continuous Equilibrium Network Design Problem, Sparse Approximation using \(\ell_1-\ell_2\) Minimization and Its Application to Stochastic Collocation, Truncated \(l_{1-2}\) Models for Sparse Recovery and Rank Minimization, A Weighted Difference of Anisotropic and Isotropic Total Variation Model for Image Processing, Heuristic discrepancy principle for variational regularization of inverse problems


Uses Software


Cites Work