Risk bounds for model selection via penalization (Q1291160)

From MaRDI portal
scientific article; zbMATH DE number 1295513

      Statements

      Risk bounds for model selection via penalization (English)
      3 June 1999
      The authors develop performance bounds for model selection criteria, drawing on recent theory for sieves. The criteria are based on an empirical loss or contrast function with an added penalty term roughly proportional to the number of parameters needed to describe the model divided by the number of observations. Most of the examples concern density or regression estimation, where the goal is to estimate the unknown density or regression function. It is shown that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve. The connection between model selection via penalization and adaptation in the minimax sense is pointed out. Illustrations of the method include penalized maximum likelihood, projection, and least squares estimation. The models involve commonly used finite-dimensional expansions such as piecewise polynomials with fixed or variable knots, trigonometric polynomials, wavelets, neural nets, and related nonlinear expansions defined by superposition of ridge functions.
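      The generic criterion described above (empirical contrast plus a penalty proportional to model dimension over sample size) can be sketched in a minimal way for polynomial regression. This is an illustrative toy, not the paper's method: the data, the Mallows-type penalty constant, and the restriction to polynomial models are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only; the paper's setting is far more general).
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def empirical_contrast(deg):
    """Least-squares contrast of the best-fitting polynomial of degree `deg`."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return np.mean(resid ** 2)

# Penalized criterion: contrast + const * D_m / n, where D_m = deg + 1 is the
# model dimension. The constant 2 * sigma2 is a hypothetical Mallows-type
# choice; the paper derives penalties of this general shape with theory-driven
# constants.
sigma2 = 0.09
degrees = range(0, 15)
crit = {d: empirical_contrast(d) + 2 * sigma2 * (d + 1) / n for d in degrees}
best = min(crit, key=crit.get)
print("selected degree:", best)
```

The selected degree balances fit against complexity: the contrast decreases with the degree, while the penalty grows linearly in the model dimension, so the minimizer stops growing once extra parameters mostly fit noise.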
      sieves
      adaptation
      penalized maximum likelihood
      projection
      least squares estimation
