Penalized likelihood-type estimators for generalized nonparametric regression (Q1914683)

From MaRDI portal
    5 August 1996
The method of maximum penalized likelihood has proved useful for a wide variety of nonparametric function estimation problems. In this method a smooth estimator of the parameter \(\theta\) is obtained by minimization of a ``penalized likelihood-type functional''. To describe the method, suppose we are given data \((X_1, Y_1), (X_2, Y_2), \dots, (X_n, Y_n)\). Then the penalized likelihood-type functional is \(l_{n \lambda} (\theta) = l_n (\theta) + (\lambda/2) J (\theta)\). The three ingredients of \(l_{n \lambda}\) are: (i) the smoothing parameter \(\lambda > 0\); (ii) the likelihood component \(l_n (\theta)\) (which depends on the data), taken to be of the form \(l_n (\theta) = n^{-1} \sum_i \rho (Y_i \mid X_i, \theta)\) for some criterion function \(\rho\) which measures ``goodness of fit'' or ``fidelity to the data'', such as \(\rho (y \mid x, \theta) = [y - \theta (x)]^2\); and (iii) the penalty functional \(J (\theta)\). If \(\theta\) is real valued and \(x\) is one-dimensional, then the most commonly used penalty functional is \(J (\theta) = \int [\theta'' (x)]^2 \, dx\), which gives rise to estimates that are cubic smoothing splines. Estimators of this type have been considered by a number of authors.

The purpose of this paper is to develop first-order asymptotic approximations for such vector-valued nonparametric regression function estimators. The results are based on linear Taylor series expansions in infinite-dimensional spaces. One application of these approximations is to derive rates of convergence for the integrated squared error of the estimator and its derivatives. The approximations also provide insight into the estimation error, which can be approximately decomposed into the sum of a bias (deterministic) term and a random term. The result on rates of convergence is stated in Section 2, along with assumptions that are used throughout the paper.
In Section 3 we give two theorems that provide the details on the asymptotic linearization of the estimator. We expect that these results will prove useful for further analysis of such estimators, e.g., establishing Gaussian approximations and asymptotic properties of smoothing parameter selection methodologies.
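The minimization described above can be sketched numerically: with the squared-error criterion \(\rho (y \mid x, \theta) = [y - \theta (x)]^2\) and the penalty \(J (\theta) = \int [\theta'' (x)]^2 dx\) approximated by second differences on an equally spaced grid, the minimizer of \(l_{n \lambda}\) solves a linear system. This is a minimal illustrative discretization, not the paper's estimator; the function name, the equal-spacing assumption, and the finite-difference form of the penalty are simplifications of mine.

```python
import numpy as np

def penalized_ls_estimate(x, y, lam):
    """Minimize l_n(theta) + (lam/2) * J(theta) over values of theta on
    the grid x, where l_n(theta) = n^{-1} sum_i (y_i - theta(x_i))^2 and
    J(theta) = int theta''(x)^2 dx is approximated by second differences.
    Assumes x is sorted and equally spaced (an illustrative
    simplification; smoothing splines do not require it)."""
    n = len(x)
    h = x[1] - x[0]
    # Second-difference operator: (D theta)_i approximates theta''(x_{i+1}).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    D /= h ** 2
    # With J(theta) ~= h * ||D theta||^2, setting the gradient
    # -(2/n)(y - theta) + lam * h * D^T D theta to zero gives a
    # linear system for the minimizer.
    A = np.eye(n) + (n * lam * h / 2.0) * (D.T @ D)
    return np.linalg.solve(A, y)
```

As \(\lambda \to 0\) the estimate interpolates the data, and as \(\lambda \to \infty\) the second-derivative penalty forces it toward a straight line, which is the trade-off governed by the smoothing parameter.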
    Keywords: maximum penalized likelihood; nonparametric regression; multiple classification; smoothing splines; smoothing parameter; penalty functional; first-order asymptotic approximations; vector-valued nonparametric regression function estimators; linear Taylor series expansions; infinite-dimensional spaces; rates of convergence; integrated squared error; estimation error; Gaussian approximations