Estimating a smooth monotone regression function (Q1175381)

scientific article
Language: English
Label: Estimating a smooth monotone regression function
Description: scientific article

    Statements

    Estimating a smooth monotone regression function (English)
    25 June 1992
    The problem of estimating a smooth monotone regression function \(m\) is studied. Two estimators, \(m_{SI}\) and \(m_{IS}\), are compared. \(m_{SI}\) consists of two steps: (i) smoothing the data by a kernel estimator, (ii) isotonizing the result by the pool adjacent violators algorithm. The estimator \(m_{IS}\) is constructed by interchanging these two steps. The author studies the asymptotic behaviour of both estimators at a fixed point \(x_0\) where the function \(m\) is assumed to be strictly monotone and smooth. It is shown that if the bandwidth of the kernel estimator is chosen of the optimal order \(n^{-1/5}\), then the estimation errors of \(m_{SI}(x_0)\) and \(m_{IS}(x_0)\) are of order \(n^{-2/5}\), and the two estimators are asymptotically equivalent to first order. The difference \(m_{SI}(x_0)-m_{IS}(x_0)\), however, is of the only slightly smaller order \(n^{-8/15}\). Theorem 3 of the paper gives stochastic higher-order expansions for \(m_{SI}(x_0)\) and \(m_{IS}(x_0)\). These expansions entail that \(m_{IS}(x_0)\) always has smaller variance and larger bias than \(m_{SI}(x_0)\). Furthermore, it is shown that the kernel function \(K\) of the chosen kernel estimator essentially determines which of the two estimators should be preferred. If the bandwidth of the kernel estimator \(m_S\) is chosen such that the mean squared error is asymptotically minimized, then \(m_{IS}(x_0)\) has asymptotically smaller mean squared error than \(m_{SI}(x_0)\) if and only if \[ \int K^2(t)\,dt\left[\int t^2K(t)\,dt\int K'(t)^2\,dt\right]^{-1} \] is smaller than a universal constant. For related literature see \textit{K. Cheng} and \textit{P. Lin} [Z. Wahrscheinlichkeitstheor. Verw. Geb. 57, 223-233 (1981; Zbl 0443.62029)] and \textit{R. E. Barlow}, \textit{D. J. Bartholomew}, \textit{J. M. Bremner} and \textit{H. D. Brunk} [Statistical inference under order restrictions. The theory and application of isotonic regression (1972; Zbl 0246.62038)].
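    The following is a minimal illustrative sketch, not the author's code, of the two estimators compared above: \(m_{SI}\) smooths the data with a kernel estimator and then isotonizes the result by the pool adjacent violators algorithm, while \(m_{IS}\) interchanges the two steps. The Nadaraya-Watson form of the smoother, the Epanechnikov kernel and all function names are illustrative assumptions, not taken from the paper.

        import numpy as np

        def pava(y):
            # Pool adjacent violators: least-squares nondecreasing fit to y.
            blocks = []  # each block is [mean, number of pooled points]
            for v in np.asarray(y, dtype=float):
                blocks.append([v, 1])
                # merge adjacent blocks while they violate monotonicity
                while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                    m2, w2 = blocks.pop()
                    m1, w1 = blocks.pop()
                    blocks.append([(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2])
            return np.concatenate([[m] * w for m, w in blocks])

        def kernel_smooth(x_grid, x, y, h):
            # Nadaraya-Watson smoother with an Epanechnikov kernel (an
            # assumption; the paper's comparison concerns a general kernel K).
            u = (x_grid[:, None] - x[None, :]) / h
            w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
            return w @ y / np.maximum(w.sum(axis=1), 1e-12)

        def m_SI(x_grid, x, y, h):
            # smooth first, then isotonize the smoothed curve on the grid
            return pava(kernel_smooth(x_grid, x, y, h))

        def m_IS(x_grid, x, y, h):
            # isotonize the responses (x assumed sorted), then smooth
            return kernel_smooth(x_grid, x, pava(y), h)

        # toy comparison at interior points
        rng = np.random.default_rng(0)
        n = 200
        x = np.sort(rng.uniform(0.0, 1.0, n))
        y = np.sqrt(x) + rng.normal(0.0, 0.1, n)  # strictly monotone truth
        h = n ** (-1 / 5)                         # bandwidth of the optimal order
        grid = np.linspace(0.1, 0.9, 41)
        gap = np.max(np.abs(m_SI(grid, x, y, h) - m_IS(grid, x, y, h)))
        print(gap)  # small, in line with the first-order equivalence noted above

    As a worked instance of the displayed criterion (ignoring the kinks of the Epanechnikov kernel at \(\pm 1\)): here \(\int K^2(t)\,dt=3/5\), \(\int t^2K(t)\,dt=1/5\) and \(\int K'(t)^2\,dt=3/2\), so the ratio equals \(2\); whether this lies below the universal constant (not reproduced in the review) then decides between the two estimators for this kernel.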
    nonparametric regression
    isotonic regression
    estimating a smooth monotone regression function
    kernel estimator
    pool adjacent violators algorithm
    bandwidth
    stochastic higher order expansions
    mean squared error
