Analysis and algorithms for some compressed sensing models based on L1/L2 minimization
From MaRDI portal
Publication:4997175
Abstract: Recently, in a series of papers [32,38,39,41], the ratio of the \(\ell_1\) and \(\ell_2\) norms was proposed as a sparsity-inducing function for noiseless compressed sensing. In this paper, we further study properties of such a model in the noiseless setting, and propose an algorithm for minimizing \(\ell_1/\ell_2\) subject to noise in the measurements. Specifically, we show that the extended objective function (the sum of the objective and the indicator function of the constraint set) of the model in [32] satisfies the Kurdyka-Łojasiewicz (KL) property with exponent 1/2; this allows us to establish linear convergence of the algorithm proposed in [39, Eq. 11] under mild assumptions. We next extend the \(\ell_1/\ell_2\) model to handle compressed sensing problems with noise. We establish solution existence for some of these models under the spherical section property [37,44], and extend the algorithm in [39, Eq. 11] by incorporating moving-balls-approximation techniques [4] for solving these problems. We prove the subsequential convergence of our algorithm under mild conditions, and establish global convergence of the whole sequence generated by our algorithm by imposing additional KL and differentiability assumptions on a specially constructed potential function. Finally, we perform numerical experiments on robust compressed sensing and basis pursuit denoising, with the residual error measured by the \(\ell_1\) norm or the Lorentzian norm, via solving the corresponding \(\ell_1/\ell_2\) models by our algorithm. Our numerical simulations show that our algorithm is able to recover the original sparse vectors with reasonable accuracy.
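As a minimal numerical illustration (not the paper's algorithm) of why the ratio \(\ell_1/\ell_2\) is a natural sparsity-inducing function, the following Python sketch compares the ratio on a sparse and a dense vector; the helper name `l1_over_l2` is ours, and the scale-invariance and range facts in the comments follow directly from the norm definitions.

```python
import numpy as np

def l1_over_l2(x, eps=1e-12):
    """Scale-invariant sparsity measure ||x||_1 / ||x||_2.

    For a nonzero n-dimensional vector the ratio lies between 1
    (exactly one nonzero entry) and sqrt(n) (all entries equal in
    magnitude), so smaller values indicate sparser vectors, and
    scaling x by any c != 0 leaves the ratio unchanged.
    """
    return np.linalg.norm(x, 1) / max(np.linalg.norm(x, 2), eps)

rng = np.random.default_rng(0)
n = 100
sparse = np.zeros(n)
sparse[:5] = rng.standard_normal(5)   # 5 nonzero entries out of 100
dense = rng.standard_normal(n)        # generically all entries nonzero

print(l1_over_l2(sparse) < l1_over_l2(dense))  # sparse scores lower
```

Minimizing this ratio over a constraint set such as \(\{x : Ax = b\}\) is a nonconvex fractional program, which is why the paper needs KL-exponent and moving-balls-approximation machinery for its convergence analysis.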
Recommendations
- Minimization of \(\ell_{1-2}\) for compressed sensing
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- A gradient based method for the \(L_{2}-L_{1/2}\) minimization and application to compressive sensing
- Computational Aspects of Constrained \(L_1\)-\(L_2\) Minimization for Compressive Sensing
- Alternating direction algorithms for \(\ell_1\)-problems in compressive sensing
Cites work
- scientific article; zbMATH DE number 46303
- A Scale-Invariant Approach for Sparse Signal Recovery
- A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
- A moving balls approximation method for a class of smooth constrained minimization problems
- A proximal difference-of-convex algorithm with extrapolation
- A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection
- Accelerated Schemes for the \(L_1/L_2\) Minimization
- Atomic decomposition by basis pursuit
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\)
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convex Analysis
- Convex analysis and global optimization
- Convex analysis and nonlinear optimization. Theory and examples.
- Decoding by Linear Programming
- Error bounds for systems of lower semicontinuous functions in Asplund spaces
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Limited-angle CT reconstruction via the \(L_1/L_2\) minimization
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- On Nonlinear Fractional Programming
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Penalty methods for a class of non-Lipschitz optimization problems
- Probing the Pareto frontier for basis pursuit solutions
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Proximal-gradient algorithms for fractional programming
- Ratio and difference of \(l_1\) and \(l_2\) norms and sparse representation with coherent dictionaries
- Revisiting Dinkelbach-type algorithms for generalized fractional programs
- Sparse Approximate Solutions to Linear Systems
- Stable signal recovery from incomplete and inaccurate measurements
- Techniques of variational analysis
- The multiproximal linearization method for convex composite problems
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Theory of compressive sensing via \(\ell_1\)-minimization: a non-RIP analysis and extensions
- Variational Analysis
Cited in (11)
- Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
- Extrapolated Proximal Subgradient Algorithms for Nonconvex and Nonsmooth Fractional Programs
- Approximation Algorithms for Model-Based Compressive Sensing
- Study on \(L_1\) over \(L_2\) Minimization for nonnegative signal recovery
- Computational Aspects of Constrained \(L_1\)-\(L_2\) Minimization for Compressive Sensing
- A gradient based method for the \(L_{2}-L_{1/2}\) minimization and application to compressive sensing
- Sparse recovery: the square of \(\ell_1/\ell_2\) norms
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints
- Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing
- Sorted \(L_1/L_2\) minimization for sparse signal recovery
This page was built for publication: Analysis and algorithms for some compressed sensing models based on L1/L2 minimization