Adaptive multi-penalty regularization based on a generalized Lasso path
DOI: 10.1016/j.acha.2018.11.001 · zbMATH Open: 1434.68408 · arXiv: 1710.03971 · OpenAlex: W2964065098 · Wikidata: Q125685045 · Scholia: Q125685045 · MaRDI QID: Q2175012 · FDO: Q2175012
Timo Klock, Valeriya Naumova, Markus Grasmair
Publication date: 27 April 2020
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1710.03971
Recommendations
- Conditions on optimal support recovery in unmixing problems by means of multi-penalty regularization
- Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices
- A nonconvex penalization algorithm with automatic choice of the regularization parameter in sparse imaging
- Adaptive regularization using the entire solution surface
- A new framework for multi-parameter regularization
Keywords: compressed sensing, adaptive parameter choice, multi-penalty regularization, Lasso path, exact support recovery, noise folding
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
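The keywords above mention the Lasso path and exact support recovery. As background only (this is an illustrative sketch with synthetic data, not the generalized multi-penalty path developed in the paper), the classical piecewise-linear Lasso path can be computed with scikit-learn's `lars_path`:

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic sparse recovery problem: 3 active coefficients out of 20.
rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # true support is {0, 1, 2}
y = X @ beta + 0.01 * rng.standard_normal(n)

# method="lasso" traces the piecewise-linear Lasso solution path:
# alphas decrease along the path, coefs[:, j] is the solution at alphas[j].
alphas, active, coefs = lars_path(X, y, method="lasso")

# The strongest true coefficient dominates the least-regularized solution.
strongest = int(np.argmax(np.abs(coefs[:, -1])))
```

Adaptive parameter choices such as the one studied here inspect this path (here, a single-penalty version) to select regularization parameters that yield exact support recovery.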
Cites Work
- Least angle regression. (With discussion)
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Regularization and Variable Selection Via the Elastic Net
- Robust Estimation of a Location Parameter
- Piecewise linear regularized solution paths
- The Lasso problem and uniqueness
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Iterative hard thresholding for compressed sensing
- Preconditioning the Lasso for sign consistency
- Oscillating patterns in image processing and nonlinear evolution equations. The fifteenth Dean Jacqueline B. Lewis memorial lectures
- Necessary and sufficient conditions for linear convergence of ℓ1-regularization
- A mathematical introduction to compressive sensing
- Greed is Good: Algorithmic Results for Sparse Approximation
- Uncertainty principles and ideal atomic decomposition
- On the consistency of feature selection using greedy least squares regression
- Uncertainty Principles and Signal Recovery
- Title not available
- Information Theoretic Bounds for Compressed Sensing
- Hard thresholding pursuit algorithms: number of iterations
- Sparse recovery via differential inclusions
- Sparsity-enforcing regularisation and ISTA revisited
- Conditions on optimal support recovery in unmixing problems by means of multi-penalty regularization
- Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices
- Damping Noise-Folding and Enhanced Support Recovery in Compressed Sensing
Cited In (3)