Analysis and algorithms for some compressed sensing models based on L1/L2 minimization
DOI: 10.1137/20M1355380 · zbMATH Open: 1470.90098 · arXiv: 2007.12821 · MaRDI QID: Q4997175
Authors: Liaoyuan Zeng, Peiran Yu, Ting Kei Pong
Publication date: 28 June 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2007.12821
Recommendations
- Minimization of \(\ell_{1-2}\) for compressed sensing
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- A gradient based method for the \(L_{2}-L_{1/2}\) minimization and application to compressive sensing
- Computational Aspects of Constrained \(L_1\)-\(L_2\) Minimization for Compressive Sensing
- Alternating direction algorithms for \(\ell_1\)-problems in compressive sensing
Mathematics Subject Classification
- Methods of successive quadratic programming type (90C55)
- Fractional programming (90C32)
- Applications of mathematical programming (90C90)
- Nonconvex programming, global optimization (90C26)
Cites Work
- Probing the Pareto frontier for basis pursuit solutions
- Variational Analysis
- On Nonlinear Fractional Programming
- Decoding by Linear Programming
- Stable signal recovery from incomplete and inaccurate measurements
- Convex Analysis
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Techniques of variational analysis
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Sparse Approximate Solutions to Linear Systems
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Atomic decomposition by basis pursuit
- Convex analysis and global optimization
- Convex analysis and nonlinear optimization. Theory and examples.
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- A moving balls approximation method for a class of smooth constrained minimization problems
- A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
- Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\)
- Theory of compressive sensing via \(\ell_1\)-minimization: a non-RIP analysis and extensions
- Error bounds for systems of lower semicontinuous functions in Asplund spaces
- Ratio and difference of \(l_1\) and \(l_2\) norms and sparse representation with coherent dictionaries
- Penalty methods for a class of non-Lipschitz optimization problems
- Revisiting Dinkelbach-type algorithms for generalized fractional programs
- The multiproximal linearization method for convex composite problems
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- Proximal-gradient algorithms for fractional programming
- A proximal difference-of-convex algorithm with extrapolation
- Limited-angle CT reconstruction via the \(L_1/L_2\) minimization
- Accelerated Schemes for the \(L_1/L_2\) Minimization
- A Scale-Invariant Approach for Sparse Signal Recovery
- A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
Cited In (11)
- Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
- Extrapolated Proximal Subgradient Algorithms for Nonconvex and Nonsmooth Fractional Programs
- Approximation Algorithms for Model-Based Compressive Sensing
- Study on \(L_1\) over \(L_2\) Minimization for nonnegative signal recovery
- Computational Aspects of Constrained \(L_1\)-\(L_2\) Minimization for Compressive Sensing
- A gradient based method for the \(L_{2}-L_{1/2}\) minimization and application to compressive sensing
- Sparse recovery: the square of \(\ell_1/\ell_2\) norms
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints
- Sorted \(L_1/L_2\) minimization for sparse signal recovery
- Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing