Locally sparse reconstruction using the \(\ell^{1,\infty}\)-norm
From MaRDI portal
Publication:256102
Abstract: This paper discusses the incorporation of local sparsity information, e.g. in each pixel of an image, via minimization of the \(\ell^{1,\infty}\)-norm. We discuss the basic properties of this norm when used as a regularization functional and the associated optimization problems, for which we derive equivalent reformulations that are more amenable either to theory or to numerical computation. Further focus of the analysis is on the locally 1-sparse case, which is well motivated by some biomedical imaging applications. Our computational approaches are based on alternating direction methods of multipliers (ADMM) and appropriate splittings with augmented Lagrangians. These are tested for a model scenario related to dynamic positron emission tomography (PET), a functional imaging technique in nuclear medicine. The results of this paper provide insight into the potential impact of regularization with the \(\ell^{1,\infty}\)-norm for local sparsity in appropriate settings. However, they also indicate several shortcomings, possibly related to the non-tightness of the functional as a relaxation of the \(\ell^{0,\infty}\)-norm.
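As a reading aid (not part of the publication itself), the \(\ell^{1,\infty}\)-norm of a pixel-by-coefficient matrix and a proximal step of the kind that appears inside ADMM splittings can be sketched as follows. All function names are illustrative; the paper's actual algorithmic details may differ.

```python
import numpy as np

def l1_inf_norm(U):
    """l^{1,inf}-norm of a pixel-by-coefficient matrix U:
    sum over pixels (rows) of the largest absolute coefficient in that pixel."""
    return np.sum(np.max(np.abs(U), axis=1))

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l^1 ball of the given radius,
    via the standard sort-and-threshold scheme."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Proximal operator of lam * ||.||_inf via the Moreau decomposition:
    prox_{lam ||.||_inf}(v) = v - P_{lam B_1}(v), since l^1 is dual to l^inf."""
    return v - project_l1_ball(v, lam)

def prox_l1_inf(U, lam):
    """Proximal operator of lam * l^{1,inf}, applied row-wise (per pixel),
    because the norm separates across pixels."""
    return np.vstack([prox_linf(row, lam) for row in U])
```

For example, `l1_inf_norm(np.array([[1.0, -2.0], [3.0, 0.5]]))` evaluates to `5.0` (max magnitude 2 in the first pixel plus 3 in the second), and the per-pixel proximal map is the building block that an ADMM splitting with augmented Lagrangians would call in its regularizer update.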
Recommendations
- Reconstruction using local sparsity. A novel regularization technique and an asymptotic analysis of spatial sparsity priors
- ADMM-EM method for \(L_1\)-norm regularized weighted least squares PET reconstruction
- Minimizing \(L_1\) over \(L_2\) norms on the gradient
- Composite SAR imaging using sequential joint sparsity
- A note on the minimization of a Tikhonov functional with \(\ell^1\)-penalty
Cites work
- scientific article (zbMATH DE number 3833218)
- scientific article (zbMATH DE number 3551792)
- scientific article (zbMATH DE number 3574917)
- scientific article (zbMATH DE number 679861)
- scientific article (zbMATH DE number 1376935)
- scientific article (zbMATH DE number 845714)
- scientific article (zbMATH DE number 936298)
- (Nonlocal) total variation in medical imaging
- A Convex Model for Nonnegative Matrix Factorization and Dimensionality Reduction on Physical Space
- A Probabilistic and RIPless Theory of Compressed Sensing
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- A proximal-based decomposition method for convex minimization problems
- Algorithms for simultaneous sparse approximation. II: Convex relaxation
- Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities
- An affine scaling methodology for best basis selection
- An iterative algorithm for nonlinear inverse problems with joint sparsity constraints in vector-valued regimes and an application to color image inpainting
- Application of the alternating direction method of multipliers to separable convex programming problems
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Characterization of the subdifferential of some matrix norms
- Compressed sensing and best \(k\)-term approximation
- Convergence rates of convex variational regularization
- Decoding by Linear Programming
- Decomposition method with a variable parameter for a class of monotone variational inequality problems
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Greed is Good: Algorithmic Results for Sparse Approximation
- Just relax: convex programming methods for identifying sparse signals in noise
- Model Selection and Estimation in Regression with Grouped Variables
- Monotone Operators and the Proximal Point Algorithm
- On Sparse Representations in Arbitrary Redundant Bases
- On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization
- Optimization with sparsity-inducing penalties
- Reconstruction using local sparsity. A novel regularization technique and an asymptotic analysis of spatial sparsity priors
- Recovery Algorithms for Vector-Valued Data with Joint Sparsity Constraints
- Regression Shrinkage and Selection via The Lasso: A Retrospective
- Regularization methods in Banach spaces.
- Robust Matrix Decomposition With Sparse Corruptions
- Robust principal component analysis?
- Sparse regression using mixed norms
- Sparse solutions to linear inverse problems with multiple measurement vectors
- Structured sparsity through convex optimization
- Uncertainty principles and ideal atomic decomposition
Cited in (5)
- Collaborative total variation: a general framework for vectorial TV models
- Reconstruction using local sparsity. A novel regularization technique and an asymptotic analysis of spatial sparsity priors
- Modern regularization methods for inverse problems
- ORKA: Object reconstruction using a K-approximation graph
- The Benefits of Acting Locally: Reconstruction Algorithms for Sparse in Levels Signals With Stable and Robust Recovery Guarantees