Estimation of a projection-pursuit type regression model (Q1175400)

scientific article
Language: English

    Statements

    Estimation of a projection-pursuit type regression model (English)
    25 June 1992
    The author proposes a nonparametric estimator of the conditional mean \(m_0(x)=E(Y\mid X=x)\), where \(X\) is assumed to be \(d\)-dimensional. The conditional mean is assumed to have the form \[ m_0(x)=\mu_0+\sum^{K_0}_{j=1}\theta_j(\beta^T_j x), \tag{1} \] where the \(\theta_j\) are \(q\)-times continuously differentiable, bounded real functions whose \(q\)-th derivative is Lipschitz, and \(\operatorname{ang}(\{\beta_1,\ldots,\beta_{K_0}\})\geq M_0>0\). Here \(\operatorname{ang}(\{\beta_1,\ldots,\beta_{K_0}\})\) denotes the minimum, over \(i=1,\ldots,K_0\), of the angle between \(\beta_i\) and the linear space spanned by \(\{\beta_1,\ldots,\beta_{K_0}\}\setminus\{\beta_i\}\); for \(K_0=1\) it is defined as \(\pi/2\).

    The estimator is sought in the form \(m(x)=\mu+\sum^k_{j=1}s_j(\alpha^T_j x)\) with \(1\leq k\leq d\), where \(\mu\) is a constant, each \(s_j\) is a polynomial spline of degree \(q\) on \([-1,1]\) with equispaced knots at distance \(2/N\), and \(\operatorname{ang}(\{\alpha_1,\ldots,\alpha_k\})\geq M>0\). The density of \((Y,X)\) is assumed to satisfy: (i) the marginal density of \(X\) is bounded away from zero and infinity on a compact set containing the unit ball \(C\) of \(R^d\); (ii) \(\inf_x\operatorname{var}(Y\mid X=x)>0\). On the estimator \(m(x)\) the following constraints are imposed: (i) there exist a positive integer \(\tau>(2d+5)(2p+1)/(2\gamma-1)\), where \(p\in(q,q+1]\) and \(\gamma\in(1/2,1)\), and a positive constant \(c_3\) such that \[ \sup_x E[|Y-m(x)|^{4\tau}\mid X=x]\leq c_3; \] (ii) \(M\leq M_0\).

    Under these constraints the estimator is the least squares estimator based only on the observations falling into the unit ball \(C\), i.e. \[ \hat m_n(x)=\arg\min\left\{\sum^n_{i=1}[y_i-m(x_i)]^2\,1_C(x_i)\right\}. \] The main result of the paper is \[ \lim_{n\to\infty}\sup_{\theta\in\Theta_{p,d}}P_\theta\left\{n^{-1}\sum^n_{i=1}[\hat m_n(x_i)-m_0(x_i)]^2\,1_C(x_i)\geq c\,n^{-2p/(2p+1)}\right\}=0, \] where \(\Theta_{p,d}\) denotes the collection of probability measures under which \(E(Y\mid X=x)\) has the form (1). Since \(n^{-2p/(2p+1)}\) is the optimal rate for one-dimensional regression of smoothness \(p\), the result shows that under the imposed constraints the rate of convergence of the estimator does not depend on the dimension \(d\).
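    To make the additive form (1) and the least squares criterion concrete, the following minimal Python sketch fits the simplest case \(K_0=k=1\), \(d=2\): spline coefficients are obtained by least squares over the observations falling into the unit ball \(C\), and the direction \(\alpha\) is chosen by a crude grid search over angles. This is an illustration, not the paper's procedure: the truncated-power basis, the grid search, and the names spline_basis and fit_single_index are assumptions made here, and the paper's final prediction error criterion for choosing \(k\) and the number of knots is not reproduced.

    import numpy as np

    def spline_basis(u, q=3, n_knots=8):
        # Truncated-power basis of degree q with equispaced interior knots
        # on [-1, 1] (an illustrative stand-in for the paper's spline space).
        knots = np.linspace(-1.0, 1.0, n_knots + 2)[1:-1]
        cols = [u ** j for j in range(q + 1)]
        cols += [np.maximum(u - t, 0.0) ** q for t in knots]
        return np.column_stack(cols)

    def fit_single_index(x, y, q=3, n_knots=8, n_dirs=180):
        # Least squares over spline coefficients, restricted to observations
        # in the unit ball C (the indicator 1_C in the criterion); the
        # direction alpha is optimized by a crude grid search over angles.
        in_C = np.linalg.norm(x, axis=1) <= 1.0
        xc, yc = x[in_C], y[in_C]
        best_rss, best_alpha, best_coef = np.inf, None, None
        for phi in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
            alpha = np.array([np.cos(phi), np.sin(phi)])  # unit vector in R^2
            B = spline_basis(xc @ alpha, q, n_knots)
            coef, *_ = np.linalg.lstsq(B, yc, rcond=None)
            rss = np.sum((yc - B @ coef) ** 2)
            if rss < best_rss:
                best_rss, best_alpha, best_coef = rss, alpha, coef
        m_hat = lambda xnew: spline_basis(xnew @ best_alpha, q, n_knots) @ best_coef
        return best_alpha, m_hat

    # Toy check: data from m_0(x) = mu_0 + theta(beta^T x) with a smooth theta.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=(500, 2))
    beta = np.array([0.6, 0.8])
    y = 1.0 + np.sin(2.0 * (x @ beta)) + 0.1 * rng.standard_normal(500)
    alpha_hat, m_hat = fit_single_index(x, y)
    print("estimated direction (up to sign):", np.round(alpha_hat, 3))

    A grid of 180 directions is feasible only for \(d=2\); in higher dimensions the search over the directions \(\alpha_j\) is the computationally hard part of the minimization.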
    conditional mean value
    least squares estimator
    rate of convergence
    least squares polynomial spline
    final prediction error criterion
    additive models
    nonparametric regression
