The degrees of freedom of partly smooth regularizers (Q2409395)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | The degrees of freedom of partly smooth regularizers | scientific article |
Statements
The degrees of freedom of partly smooth regularizers (English)
11 October 2017
A regression model \[ \mathcal{M}(Y|X) = h(X\beta) \] is considered, where \(Y\) is the response vector, \(\beta\) is the unknown vector of regression parameters, \(X\) is the fixed design matrix, and \(h\) is a known smooth vector function. A class of estimators is introduced, obtained by minimizing a general convex optimization problem whose regularizing penalty encodes a low-complexity prior. Based on the concept of partial smoothness, which encompasses popular examples including the LASSO, the group LASSO, and the max and nuclear norms, the sensitivity of any solution of the optimization problem to small perturbations of \(Y\) is investigated. This allows the authors to derive an analytical expression for the local variations of the estimators under perturbations of the observations, and to show that the set where the estimator behaves non-smoothly as a function of the observations has Lebesgue measure zero. Both results pave the way for unbiased estimators of the prediction risk in two scenarios, one of which covers the continuous exponential family. The results presented unify and extend those already known in the literature.
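The simplest instance covered by this framework is classical: for the LASSO with Gaussian noise, the number of nonzero coefficients of a solution is an unbiased estimate of its degrees of freedom, which in turn yields a SURE-type unbiased estimate of the prediction risk. The following is a minimal sketch of that special case, not the authors' code; it assumes known noise variance, and all parameter values and the use of scikit-learn's `Lasso` are illustrative.

```python
# Minimal sketch: LASSO degrees of freedom and a SURE-type risk estimate.
# Assumptions (not from the reviewed paper): Gaussian noise with known
# sigma, illustrative dimensions and regularization strength.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma = 100, 30, 0.5

X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0                      # sparse ground truth
y = X @ beta_true + sigma * rng.standard_normal(n)

model = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
beta_hat = model.coef_

# Degrees of freedom of the LASSO: the size of the support of the solution.
dof = np.count_nonzero(beta_hat)

# SURE-type unbiased estimate of the prediction risk
# E||X beta_true - X beta_hat||^2 in the Gaussian scenario:
# RSS - n*sigma^2 + 2*sigma^2*dof.
rss = np.sum((y - X @ beta_hat) ** 2)
sure = rss - n * sigma**2 + 2 * sigma**2 * dof
print(f"dof estimate = {dof}, SURE risk estimate = {sure:.3f}")
```

Evaluating such a risk estimate over a grid of regularization strengths is the standard way these degrees-of-freedom formulas are used for parameter selection.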
degrees of freedom
partial smoothness
manifold
sparsity
model selection
o-minimal structures
semi-algebraic sets
group Lasso
total variation