On the choice of support of re-descending \(\psi\)-functions in linear models with asymmetric error distributions (Q1813434)

From MaRDI portal
scientific article
    Statements

    On the choice of support of re-descending \(\psi\)-functions in linear models with asymmetric error distributions (English)
    25 June 1992
    M-estimation of the parameter vector in the usual linear regression model is considered. In particular, re-descending \(\psi\)-functions (and hence estimators with a re-descending influence function) are discussed. The error distribution function \(F(y)\) is assumed to equal the standard normal distribution contaminated by a distribution \(H(y)\) symmetric about zero, \(F(y)=(1-\varepsilon)\Phi(y)+\varepsilon H(y)\) for \(-a_0<y<a_0\), and to be completely unknown outside the interval \((-a_0,a_0)\). For known values of \(a_0\) and \(\varepsilon\), a particular re-descending \(\psi\)-function is known to have certain optimality properties. This \(\psi\)-function is linear in an interval \((-y_0,y_0)\) around zero, with \(y_0\) depending on \(\varepsilon\) and \(a_0\), so that observations with small residuals are treated as in least squares estimation. For larger arguments the observations are downweighted, and for arguments outside the interval \((-a_0,a_0)\) they receive weight zero, i.e. they are treated as if they were removed from the sample. The optimality properties of the estimator rely on the parameter \(a_0\) being known. If \(a_0\) is not known, it has to be estimated. Further, if \(F\) is not known to be of the above form but one nevertheless wishes to use the indicated \(\psi\)-function, the problem of selecting an appropriate value of \(a_0\) arises again. In this paper a method for selecting \(a_0\) (and \(y_0\)) is given such that the asymptotic variance of the estimator of the regression parameters is minimized.
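    The selection criterion can be sketched numerically. The snippet below is a minimal illustration, not the procedure of the paper: it assumes a Hampel-type \(\psi\)-function that is the identity on \((-y_0,y_0)\), descends linearly to zero on \([y_0,a_0)\), and vanishes outside \((-a_0,a_0)\) (the exact descending shape used in the paper is not specified in this review), and it evaluates the scalar asymptotic-variance factor \(E[\psi^2]/(E[\psi'])^2\) against an assumed contaminated-normal error density. In practice these expectations would be estimated from residuals; all function and parameter names here are hypothetical.

```python
import numpy as np
from scipy import integrate, optimize, stats

def psi(y, y0, a0):
    """Illustrative re-descending psi-function: identity on (-y0, y0),
    linear descent to zero on [y0, a0), exactly zero outside (-a0, a0).
    The linear-descent middle part is an assumption, not the paper's form."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) < y0, y,
                    np.sign(y) * y0 * np.clip(a0 - np.abs(y), 0.0, None) / (a0 - y0))

def psi_prime(y, y0, a0):
    """Derivative of psi (needed for the asymptotic-variance factor)."""
    ay = np.abs(np.asarray(y, dtype=float))
    return np.where(ay < y0, 1.0,
                    np.where(ay < a0, -y0 / (a0 - y0), 0.0))

def avar_factor(y0, a0, density):
    """Scalar factor E[psi^2] / (E[psi'])^2 of the asymptotic covariance of
    the regression M-estimator, by numerical integration against a density."""
    num, _ = integrate.quad(lambda y: psi(y, y0, a0) ** 2 * density(y), -a0, a0)
    den, _ = integrate.quad(lambda y: psi_prime(y, y0, a0) * density(y), -a0, a0)
    return num / den ** 2

# Illustration only: a contaminated normal density; with real data the
# integrals would be replaced by averages over estimated residuals.
eps = 0.05
density = lambda y: (1 - eps) * stats.norm.pdf(y) + eps * stats.norm.pdf(y, scale=3.0)

# Choose (y0, a0) minimizing the asymptotic-variance factor.
res = optimize.minimize(lambda p: avar_factor(p[0], p[1], density),
                        x0=[1.5, 4.0],
                        bounds=[(0.5, 3.0), (3.1, 8.0)])
y0_opt, a0_opt = res.x
print(f"selected y0 = {y0_opt:.2f}, a0 = {a0_opt:.2f}")
```

    Since the \(\psi\)-function is zero outside \((-a_0,a_0)\), both integrals only involve the part of the error distribution on that interval, which is why the criterion can be evaluated even though \(F\) is unknown outside it.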
    linear models
    asymmetric errors
    robust estimation
    M-estimation
    re-descending influence function
    optimality properties
    least squares estimator
    asymptotic variance