On the asymptotic mean square error of \(L_1\) kernel estimates of smooth functions (Q1099021)


    Statements

    On the asymptotic mean square error of \(L_1\) kernel estimates of smooth functions (English)
    1987
    The author is concerned with the mean square error (MSE) of estimates of smooth functions. If \(\hat g(t)\) is the curve estimate of a given function \(g(t)\), then the MSE can be decomposed into variance plus squared bias: \[ E(\hat g(t)-g(t))^2 = \operatorname{var}(\hat g(t)) + (E\hat g(t)-g(t))^2. \] Both the variance and the bias of a curve estimator should be kept small; keeping the bias small amounts to finding a deterministic approximation to the function \(g\). The author uses smoothing and approximation methods of convolution type, employing \(L_1\) kernel functions. The order \(k\) of a kernel function \(K\) is defined to be the least positive integer \(k\) such that \(B_k(K) := \int K(x)x^k\,dx \neq 0\). To obtain a faster rate of convergence for \(C^{\infty}\) functions, the author suggests letting the order of the kernel increase as the number \(n\) of observations increases. In this setting, the author shows that for a broad class of functions the optimal rate is \(O(\alpha_n^{1/2}/n)\), where \((\alpha_n/e)^{\alpha_n} \sim n\).
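    The following minimal numerical sketch (not from the paper) illustrates the variance-plus-squared-bias decomposition of the MSE for a convolution-type kernel estimate at a single point; the Epanechnikov kernel, the Priestley-Chao-type weighting, the target function, and all parameter values are illustrative assumptions rather than the author's construction.

    import numpy as np

    def epanechnikov(u):
        """Second-order kernel: integrates to 1, first moment 0, second moment nonzero."""
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    def kernel_estimate(t, x, y, h, kernel=epanechnikov):
        """Convolution-type (Priestley-Chao-style) estimate of g(t) from noisy
        observations y_i = g(x_i) + eps_i on an equally spaced design:
            ghat(t) = (spacing / h) * sum_i K((t - x_i) / h) * y_i
        """
        spacing = x[1] - x[0]
        return np.sum(kernel((t - x) / h) * y) * spacing / h

    rng = np.random.default_rng(0)
    g = np.cos                           # smooth target function (illustrative)
    n, h, sigma, t0 = 400, 0.15, 0.3, 0.0
    x = np.linspace(-1.0, 1.0, n)        # fixed equally spaced design

    # Monte Carlo estimates of MSE, variance, and squared bias at the point t0.
    estimates = np.array([
        kernel_estimate(t0, x, g(x) + sigma * rng.standard_normal(n), h)
        for _ in range(2000)
    ])
    mse   = np.mean((estimates - g(t0))**2)
    var   = np.var(estimates)
    bias2 = (np.mean(estimates) - g(t0))**2
    print(f"MSE {mse:.5f}  ~  var {var:.5f} + bias^2 {bias2:.5f}")

    Up to Monte Carlo error, the printed MSE equals the sum of the variance and squared-bias terms; shrinking the bandwidth h reduces the bias contribution while inflating the variance, which is the trade-off the decomposition makes explicit.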
    mean square error
    smooth functions
    convolution
    \(L_1\) kernel functions
    optimal rate