Risk bounds for model selection via penalization (Q1291160)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Risk bounds for model selection via penalization | scientific article |
Statements
Risk bounds for model selection via penalization (English)
3 June 1999
The authors develop performance bounds for model selection criteria, using recent theory for sieves. The criteria are based on an empirical loss or contrast function with an added penalty term roughly proportional to the number of parameters needed to describe the model, divided by the number of observations. Most of the examples involve density or regression estimation, and the authors focus on estimating the unknown density or regression function. It is shown that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve; the connection between model selection via penalization and adaptation in the minimax sense is also pointed out. The method is illustrated by penalized maximum likelihood, projection, and least squares estimation. The models involve commonly used finite-dimensional expansions such as piecewise polynomials with fixed or variable knots, trigonometric polynomials, wavelets, neural nets, and related nonlinear expansions defined by superposition of ridge functions.
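As a minimal numerical sketch of the penalized-criterion idea described above (not the paper's own construction): among polynomial regression models of increasing degree, one can select the model minimizing the empirical least-squares contrast plus a penalty proportional to the model dimension divided by the sample size. The constants `c` and `sigma2` below, and the choice of polynomial models, are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: noisy observations of a smooth regression function.
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

sigma2 = 0.09  # noise variance, assumed known in this sketch
c = 2.0        # penalty constant, chosen ad hoc here

def penalized_crit(degree):
    """Empirical contrast (mean squared residual) of the degree-`degree`
    polynomial fit, plus a penalty proportional to D / n,
    where D = degree + 1 is the model dimension."""
    D = degree + 1
    coeffs = np.polyfit(x, y, degree)
    contrast = np.mean((y - np.polyval(coeffs, x)) ** 2)
    return contrast + c * sigma2 * D / n

# Minimum penalized empirical contrast over a family of nested models.
degrees = range(0, 11)
best = min(degrees, key=penalized_crit)
```

With the penalty scaling as dimension over sample size, the selected degree balances approximation error against the cost of extra parameters, rather than simply picking the largest model.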
sieves
adaptation
penalized maximum likelihood
projection
least squares estimation