Weak convergence of \(k\)-NN density and regression estimators with varying \(k\) and applications (Q1102052)
From MaRDI portal
Property / full work available at URL: https://doi.org/10.1214/aos/1176350487
Property / OpenAlex ID: W2002225341
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Weak convergence of \(k\)-NN density and regression estimators with varying \(k\) and applications | scientific article |
Statements
Weak convergence of \(k\)-NN density and regression estimators with varying \(k\) and applications (English)
1987
Let \((X_i,Z_i)\), \(i\geq 1\), be independent, two-dimensional random vectors distributed as \((X,Z)\), where \(X\) has marginal distribution function \(F\) with density function \(f\) and where \(\mu(x)=E(Z\mid X=x)\) is the regression function of \(Z\) at \(X=x\). For fixed \(x\), set \(Y_i=|X_i-x|\), let \(Y_{n1}\leq\dots\leq Y_{nn}\) denote the order statistics of \(Y_1,\dots,Y_n\), and let \(Z_{n1},\dots,Z_{nn}\) be the induced order statistics in \((Y_1,Z_1),\dots,(Y_n,Z_n)\), i.e., \(Z_{ni}=Z_j\) if \(Y_{ni}=Y_j\). The \(k\)-nearest neighbor (\(k\)-NN) estimator of \(f(x)\) corresponding to the uniform kernel, i.e., \(f_{nk}(x)=(k-1)/(2nY_{nk})\), and the \(k\)-NN estimator of \(\mu(x)\) with uniform weights, i.e., \(\mu_{nk}(x)=k^{-1}\sum^{k}_{j=1}Z_{nj}\), for fixed \(x\) and \(k\) varying in an appropriate range, are transformed into continuous-time stochastic processes by setting \[ T_n(t)=f_{n,[n^{4/5}t]}(x),\quad S_n(t)=\mu_{n,[n^{4/5}t]}(x),\quad 0<a\leq t\leq b<\infty. \] Under the usual second-order smoothness conditions, it is shown that the two processes \[ \{n^{2/5}[T_n(t)-f(x)],\quad a\leq t\leq b\},\quad \{n^{2/5}[S_n(t)-\mu(x)],\quad a\leq t\leq b\} \] have a common limiting structure as the sample size \(n\) tends to infinity. These results lead to asymptotic linear models in which BLUEs (best linear unbiased estimators) and suitably biased linear combinations of \(k\)-NN estimators with varying \(k\) are considered.
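The following minimal Python sketch illustrates the estimators defined in the review: the formulas \(f_{nk}(x)=(k-1)/(2nY_{nk})\), \(\mu_{nk}(x)=k^{-1}\sum_{j=1}^{k}Z_{nj}\) and the index choice \(k=[n^{4/5}t]\) are taken from the text above, while the simulated data, the function names, and the grid of \(t\) values are illustrative assumptions and not part of the reviewed paper.

```python
import numpy as np

def knn_estimates(X, Z, x, k):
    """k-NN density and regression estimates at a fixed point x.

    Uniform-kernel estimators as in the review:
        f_nk(x)  = (k - 1) / (2 * n * Y_nk)
        mu_nk(x) = mean of the k induced order statistics Z_n1, ..., Z_nk
    where Y_n1 <= ... <= Y_nn are the ordered distances |X_i - x|.
    """
    n = len(X)
    Y = np.abs(X - x)          # distances Y_i = |X_i - x|
    order = np.argsort(Y)      # indices giving Y_n1 <= ... <= Y_nn
    Y_nk = Y[order][k - 1]     # k-th nearest-neighbour distance
    Z_induced = Z[order][:k]   # induced order statistics Z_n1, ..., Z_nk
    f_nk = (k - 1) / (2.0 * n * Y_nk)
    mu_nk = Z_induced.mean()
    return f_nk, mu_nk

def knn_processes(X, Z, x, t_grid):
    """Discretized T_n(t) = f_{n,[n^{4/5} t]}(x) and S_n(t) = mu_{n,[n^{4/5} t]}(x)."""
    n = len(X)
    T, S = [], []
    for t in t_grid:
        k = max(2, int(np.floor(n ** 0.8 * t)))  # k = [n^{4/5} t], kept >= 2
        f_nk, mu_nk = knn_estimates(X, Z, x, k)
        T.append(f_nk)
        S.append(mu_nk)
    return np.array(T), np.array(S)

# Illustrative use with simulated data (an assumption, not from the paper):
# X ~ N(0,1) with density f, and Z = mu(X) + noise with mu(x) = sin(x).
rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal(n)
Z = np.sin(X) + 0.1 * rng.standard_normal(n)
x0 = 0.5
t_grid = np.linspace(0.2, 1.0, 9)   # a <= t <= b, here a = 0.2, b = 1.0
T_n, S_n = knn_processes(X, Z, x0, t_grid)
print(T_n)   # estimates of f(x0), the N(0,1) density at 0.5 (about 0.352)
print(S_n)   # estimates of mu(x0) = sin(0.5) (about 0.479)
```

Plotting \(T_n(t)\) and \(S_n(t)\) over the grid gives a discretized view of the two processes with varying \(k\) whose joint weak convergence the paper establishes.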
regression estimation
k-nearest neighbor estimators
density estimation
weak convergence
order statistics
induced order statistics
uniform kernel
uniform weights
continuous time stochastic processes
second-order smoothness conditions
asymptotic linear models
BLUE
biased linear combinations of k-NN estimators