Minimizing \(L_1\) over \(L_2\) norms on the gradient
Abstract: In this paper, we study \(L_1/L_2\) minimization on the gradient for imaging applications. Several recent works have demonstrated that \(L_1/L_2\) is better than the \(L_1\) norm at approximating the \(L_0\) norm to promote sparsity. Consequently, we postulate that applying \(L_1/L_2\) on the gradient is better than the classic total variation (the \(L_1\) norm on the gradient) at enforcing the sparsity of the image gradient. To verify our hypothesis, we consider a constrained formulation and present empirical evidence of the superiority of \(L_1/L_2\) over \(L_1\) when recovering piecewise constant signals from low-frequency measurements. Numerically, we design a specific splitting scheme under which we can prove subsequential and global convergence of the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of \(L_1/L_2\) over \(L_1\) and other nonconvex regularizations for image recovery from low-frequency measurements and for two medical imaging applications, MRI and CT reconstruction. All the numerical results show the efficiency of our proposed approach.
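For a nonzero \(s\)-sparse vector \(x\), the ratio \(\|x\|_1/\|x\|_2\) lies in \([1, \sqrt{s}]\), so minimizing it over the gradient \(\nabla u\) favors signals with few jumps; the constrained formulation discussed in the abstract is of the form \(\min_u \|\nabla u\|_1/\|\nabla u\|_2\) subject to a measurement constraint. The following minimal Python sketch (not taken from the paper; the test signals and forward-difference discretization are illustrative assumptions) shows why the ratio acts as a gradient-sparsity measure:

```python
import numpy as np

def l1_over_l2_of_gradient(u):
    """Return ||Du||_1 / ||Du||_2 for the forward-difference gradient Du.

    Illustrative helper (not the paper's solver): the ratio of an s-sparse
    vector lies in [1, sqrt(s)], so smaller values indicate sparser gradients.
    """
    g = np.diff(u)
    norm2 = np.linalg.norm(g)
    return np.abs(g).sum() / norm2 if norm2 > 0 else 0.0

n = 256
t = np.linspace(0.0, 1.0, n)
piecewise = np.where(t < 0.5, 0.0, 1.0)  # one jump: gradient is 1-sparse
smooth = np.sin(2 * np.pi * t)           # dense gradient everywhere

print(l1_over_l2_of_gradient(piecewise))  # 1.0 (a single nonzero difference)
print(l1_over_l2_of_gradient(smooth))     # much larger (close to sqrt(n) scale)
```

In contrast, the total variation \(\|\nabla u\|_1\) of the piecewise constant signal can be made arbitrarily small by shrinking the jump height, whereas the scale-invariant ratio depends only on the gradient's support pattern, which is the property the paper exploits.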
Recommendations
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- Limited-angle CT reconstruction via the \(L_1/L_2\) minimization
- A Scale-Invariant Approach for Sparse Signal Recovery
- Truncated \(l_{1-2}\) models for sparse recovery and rank minimization
- New restricted isometry property analysis for \(\ell_1-\ell_2\) minimization methods
Cites work
- scientific article; zbMATH DE number 5060482
- A Multiplicative Iterative Algorithm for Box-Constrained Penalized Likelihood Image Restoration
- A Scale-Invariant Approach for Sparse Signal Recovery
- A Total Fractional-Order Variation Model for Image Restoration with Nonhomogeneous Boundary Conditions and Its Numerical Solution
- A nonconvex model with minimax concave penalty for image restoration
- A unified approach to model selection and sparse recovery using regularized least squares
- A weighted difference of anisotropic and isotropic total variation model for image processing
- AIR tools -- a MATLAB package of algebraic iterative reconstruction methods
- Accelerated schemes for the \(L_1/L_2\) minimization
- Decomposition of images by the anisotropic Rudin-Osher-Fatemi model
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Fractional-order total variation image denoising based on proximity algorithm
- Global convergence of splitting methods for nonconvex composite optimization
- IR tools: a MATLAB package of iterative regularization methods and large-scale test problems
- Likelihood-based selection and sharp parameter estimation
- Limited-angle CT reconstruction via the \(L_1/L_2\) minimization
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- Minimization of transformed \(l_1\) penalty: closed form representation and iterative thresholding algorithms
- Nearly unbiased variable selection under minimax concave penalty
- Nonlinear total variation based noise removal algorithms
- Point source super-resolution via non-convex \(L_1\) based methods
- Principles of computerized tomography imaging
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Sparse Approximate Solutions to Linear Systems
- Stable signal recovery from incomplete and inaccurate measurements
- Super-resolution from noisy data
- Super-resolution of positive sources: the discrete setup
- Superresolution via Sparsity Constraints
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Total generalized variation
- Towards a mathematical theory of super-resolution
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (13)
- Locally sparse reconstruction using the \(\ell^{1,\infty}\)-norm
- A scale-invariant relaxation in low-rank tensor recovery with an application to tensor completion
- A stochastic ADMM algorithm for large-scale ptychography with weighted difference of anisotropic and isotropic total variation
- \(\boldsymbol{L_1-\beta L_q}\) Minimization for Signal and Image Recovery
- An efficient smoothing and thresholding image segmentation framework with weighted anisotropic-isotropic total variation
- When can \(l_p\)-norm objective functions be minimized via graph cuts?
- Poissonian image restoration via the \(L_1/L_2\)-based minimization
- Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications
- Combined \(\ell_{2}\) data and gradient fitting in conjunction with \(\ell_{1}\) regularization
- Efficient color image segmentation via quaternion-based \(L_1/L_2\) Regularization
- Minimization of \(L_1\) over \(L_2\) for sparse signal recovery with convergence guarantee
- Minimizing the \(L_\infty\) norm of the gradient with an energy constraint
- Sorted \(L_1/L_2\) minimization for sparse signal recovery