Model selection with low complexity priors
DOI: 10.1093/IMAIAI/IAV005
zbMATH Open: 1386.94040
arXiv: 1307.2342
OpenAlex: W2097717989
MaRDI QID: Q4603697
FDO: Q4603697
Authors: Samuel Vaiter, Mohammad Golbabaee, Jalal Fadili, Gabriel Peyré
Publication date: 19 February 2018
Published in: Information and Inference: A Journal of the IMA
Full work available at URL: https://arxiv.org/abs/1307.2342
Recommendations
- Model Selection when There is "Minimal" Prior Information
- Model complexity and model priors
- Model Selection Using the Minimum Description Length Principle
- Bayesian model selection using encompassing priors
- Model selection with vague prior information
- Model selection: a Lagrange optimization approach
- Model selection using mass-nonlocal prior
- Minimal penalties for Gaussian model selection
Keywords: total variation; model selection; sparsity; inverse problems; compressed sensing; convex regularization; partial smoothness
MSC classification: Signal theory (characterization, reconstruction, filtering, etc.) (94A12); Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
Cites Work
- Nonlinear total variation based noise removal algorithms
- Atomic Decomposition by Basis Pursuit
- Variational Analysis
- Sparsity and Smoothness Via the Fused Lasso
- Regularization and Variable Selection Via the Elastic Net
- Model Selection and Estimation in Regression with Grouped Variables
- Consistency of the group Lasso and multiple kernel learning
- Consistent group selection in high-dimensional linear regression
- Linear convergence rates for Tikhonov regularization with positively homogeneous functionals
- On Sparse Representations in Arbitrary Redundant Bases
- Uncertainty principles and ideal atomic decomposition
- Convergence rates of convex variational regularization
- Some theoretical results on the grouped variables Lasso
- Consistency of trace norm minimization
- The cosparse analysis model and algorithms
- The convex geometry of linear inverse problems
- Analysis versus synthesis in signal priors
- Probability of unique integer solution to a system of linear equations
- Identifiable Surfaces in Constrained Optimization
- The 𝒰-Lagrangian of a convex function
- Active Sets, Nonsmoothness, and Sensitivity
- Counting the faces of randomly-projected hypercubes and orthants, with applications
- Simple bounds for recovering low-complexity models
- On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs
- Partial Smoothness, Tilt Stability, and Generalized Hessians
- Robust Sparse Analysis Regularization
- Uncertainty Principles and Vector Quantization
- Shrinkage and variable selection by polytopes
Cited In (22)
- Model selection using mass-nonlocal prior
- Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling
- Sparsity-Inducing Nonconvex Nonseparable Regularization for Convex Image Processing
- Local behavior of sparse analysis regularization: applications to risk estimation
- Simplicity and model selection
- One condition for solution uniqueness and robustness of both \(\ell_1\)-synthesis and \(\ell_1\)-analysis minimizations
- The degrees of freedom of partly smooth regularizers
- Low complexity regularization of linear inverse problems
- Activity identification and local linear convergence of forward-backward-type methods
- Sensitivity analysis for mirror-stratifiable convex functions
- The basins of attraction of the global minimizers of non-convex inverse problems with low-dimensional models in infinite dimension
- Sharp oracle inequalities for low-complexity priors
- On debiasing restoration algorithms: applications to total-variation and nonlocal-means
- Local linear convergence of proximal coordinate descent algorithm
- Proximal gradient methods with adaptive subspace sampling
- Generalized conditional gradient with augmented Lagrangian for composite minimization
- Cosparsity in Compressed Sensing
- Few Paths, Fewer Words: Model Selection With Automatic Structure Functions
- Sampling from non-smooth distributions through Langevin diffusion
- Activity identification and local linear convergence of Douglas-Rachford/ADMM under partial smoothness
- A theory of optimal convex regularization for low-dimensional recovery