Model selection with low complexity priors
Abstract: Regularization plays a pivotal role in solving ill-posed inverse problems, where the number of observations is smaller than the ambient dimension of the object to be estimated. A line of recent work has studied regularization models with various types of low-dimensional structure. In such settings, the general approach is to solve a regularized optimization problem that combines a data fidelity term with a regularization penalty promoting the assumed low-dimensional/simple structure. This paper provides a general framework to capture this low-dimensional structure through what we coin partly smooth functions relative to a linear manifold: convex, non-negative, closed and finite-valued functions that promote objects living on low-dimensional subspaces. This class of regularizers encompasses many popular examples such as the \(\ell_1\) norm, the \(\ell_1\)-\(\ell_2\) norm (group sparsity), and several others including the \(\ell_\infty\) norm. We also show that the set of partly smooth functions relative to a linear manifold is closed under addition and pre-composition by a linear operator, which allows us to cover mixed regularization and the so-called analysis-type priors (e.g. total variation, fused Lasso, finite-valued polyhedral gauges). Our main result is a unified, sharp analysis of exact and robust recovery of the low-dimensional subspace model associated with the object to be recovered from partial measurements. This analysis is illustrated on a number of special, previously studied cases and on the performance of \(\ell_\infty\) regularization in a compressed sensing scenario.
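For concreteness, the generic variational problem the abstract refers to can be sketched as follows (the notation here is assumed for illustration, not fixed by the abstract): given partial measurements \(y = \Phi x_0 + w\) of an object \(x_0 \in \mathbb{R}^n\), with a measurement operator \(\Phi \in \mathbb{R}^{m \times n}\) and \(m < n\), one solves
\[ \min_{x \in \mathbb{R}^n} \; \frac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x), \]
where the first term is the data fidelity, \(J\) is a partly smooth regularizer such as \(J = \|\cdot\|_1\), and \(\lambda > 0\) balances fidelity to the measurements against the low-complexity prior.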
Recommendations
- Model Selection when There is "Minimal" Prior Information
- Model complexity and model priors
- scientific article; zbMATH DE number 1911043
- Model Selection Using the Minimum Description Length Principle
- Bayesian model selection using encompassing priors
- Model selection with vague prior information
- Model selection: a Lagrange optimization approach
- Model selection using mass-nonlocal prior
- Minimal penalties for Gaussian model selection
Cites work
- scientific article; zbMATH DE number 2155014
- scientific article; zbMATH DE number 845714
- Active Sets, Nonsmoothness, and Sensitivity
- Analysis versus synthesis in signal priors
- Atomic Decomposition by Basis Pursuit
- Consistency of the group Lasso and multiple kernel learning
- Consistency of trace norm minimization
- Consistent group selection in high-dimensional linear regression
- Convergence rates of convex variational regularization
- Counting the faces of randomly-projected hypercubes and orthants, with applications
- Identifiable Surfaces in Constrained Optimization
- Linear convergence rates for Tikhonov regularization with positively homogeneous functionals
- Model Selection and Estimation in Regression with Grouped Variables
- Nonlinear total variation based noise removal algorithms
- On Sparse Representations in Arbitrary Redundant Bases
- On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs
- Partial Smoothness, Tilt Stability, and Generalized Hessians
- Probability of unique integer solution to a system of linear equations
- Regularization and Variable Selection Via the Elastic Net
- Robust Sparse Analysis Regularization
- Shrinkage and variable selection by polytopes
- Simple bounds for recovering low-complexity models
- Some theoretical results on the grouped variables Lasso
- Sparsity and Smoothness Via the Fused Lasso
- The convex geometry of linear inverse problems
- The cosparse analysis model and algorithms
- The 𝒰-Lagrangian of a convex function
- Uncertainty Principles and Vector Quantization
- Uncertainty principles and ideal atomic decomposition
- Variational Analysis
Cited in (22)
- Model selection using mass-nonlocal prior
- Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling
- Local behavior of sparse analysis regularization: applications to risk estimation
- Sparsity-Inducing Nonconvex Nonseparable Regularization for Convex Image Processing
- Simplicity and model selection
- One condition for solution uniqueness and robustness of both \(\ell_1\)-synthesis and \(\ell_1\)-analysis minimizations
- The degrees of freedom of partly smooth regularizers
- Low complexity regularization of linear inverse problems
- Activity identification and local linear convergence of forward-backward-type methods
- Sharp oracle inequalities for low-complexity priors
- Sensitivity analysis for mirror-stratifiable convex functions
- The basins of attraction of the global minimizers of non-convex inverse problems with low-dimensional models in infinite dimension
- On debiasing restoration algorithms: applications to total-variation and nonlocal-means
- Local linear convergence of proximal coordinate descent algorithm
- Proximal gradient methods with adaptive subspace sampling
- Generalized conditional gradient with augmented Lagrangian for composite minimization
- Cosparsity in Compressed Sensing
- Few Paths, Fewer Words: Model Selection With Automatic Structure Functions
- Sampling from non-smooth distributions through Langevin diffusion
- Activity identification and local linear convergence of Douglas-Rachford/ADMM under partial smoothness
- A theory of optimal convex regularization for low-dimensional recovery
- scientific article; zbMATH DE number 7306914