Dimension reduction for kernel-assisted M-estimators with missing response at random (Q2317886)

From MaRDI portal
scientific article

    Statements

    Dimension reduction for kernel-assisted M-estimators with missing response at random (English)
    13 August 2019
    Let \((X_i, Y_i, \delta_i)\), \(i = 1, 2, \ldots, n\), be independent realizations of \((X, Y, \delta)\), where \(X\) is a \(d\)-dimensional, fully observed covariate vector, \(Y\) is a univariate real-valued response that may be missing, and \(\delta\) is the non-missingness indicator. The response \(Y\) has distribution function \(F\), and \(\theta = \theta(F)\) is the parameter of interest. Let \(\theta_0\), the true value of \(\theta\), be the unique solution of the estimating equation \(E[\Phi(Y, \theta)] = 0\), where \(\Phi(\cdot, \cdot)\) is a known function. M-estimators form a flexible class of estimators in this setting. The response is assumed to be missing at random (MAR). The paper considers three M-estimators, each aiming at unbiased and efficient estimation under MAR: inverse probability weighting (IPW), nonparametric imputation (MI) and nonparametric augmented inverse probability weighting (AIPW). The novelty here is that the dimension \(d\) of \(X\) is assumed to be large; in that situation the three M-estimators lose efficiency and their scope of application is limited. To address this, a sufficient dimension reduction (SDR) technique is applied: \(X\) is replaced by \(S = BX\), where \(B\) is a \(p \times d\) deterministic matrix with \(p < d\), chosen so that the relevant conditional independences (\(Y \perp X \mid S\); \(\delta \perp X \mid S\); or both) are preserved. This means that the information contained in \(X\) about \((\delta, \delta\Phi(Y, \theta))\) is summarized by \(S\). However, the second condition, \(\delta\Phi(Y, \theta) \perp X \mid S\), involves the unknown \(\theta\), so \(\delta\Phi(Y, \theta)\) is not observable. To bypass this difficulty, the paper imposes a condition on \(\Phi(Y, \theta)\) under which \(S\) is invariant with respect to \(\theta\).
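    For concreteness, the three estimators solve estimating equations of the following familiar forms, written here in standard generic notation with \(\pi(S) = P(\delta = 1 \mid S)\) and \(m(S, \theta) = E[\Phi(Y, \theta) \mid S]\) (a sketch of the standard IPW/MI/AIPW constructions, not necessarily the paper's exact displays):
\[
\begin{aligned}
\text{IPW:}\quad & \frac{1}{n}\sum_{i=1}^{n} \frac{\delta_i}{\hat{\pi}(S_i)}\,\Phi(Y_i, \theta) = 0,\\
\text{MI:}\quad & \frac{1}{n}\sum_{i=1}^{n} \Big[\delta_i\,\Phi(Y_i, \theta) + (1 - \delta_i)\,\hat{m}(S_i, \theta)\Big] = 0,\\
\text{AIPW:}\quad & \frac{1}{n}\sum_{i=1}^{n} \Big[\frac{\delta_i}{\hat{\pi}(S_i)}\,\Phi(Y_i, \theta) - \frac{\delta_i - \hat{\pi}(S_i)}{\hat{\pi}(S_i)}\,\hat{m}(S_i, \theta)\Big] = 0,
\end{aligned}
\]
where \(\hat{\pi}\) and \(\hat{m}\) denote kernel (nonparametric) estimates computed on the reduced covariate \(S\) rather than on the full vector \(X\).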
Under this condition, the paper shows that each component of the nonparametric estimating equations is well behaved and that each of the M-estimators \(\hat{\theta}_l\), where \(l\) stands for IPW, MI or AIPW, satisfies a central limit theorem. Simulation studies and a real data analysis illustrate the efficiency of the proposal.
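The kernel-assisted IPW idea can be sketched numerically. The toy example below (an illustration under assumed names and a simulated design, not the paper's implementation) estimates the mean \(\theta = E[Y]\), i.e. \(\Phi(y, \theta) = y - \theta\), when the response is MAR given a one-dimensional reduced index \(S = b^\top X\): the propensity \(\pi(s)\) is estimated by a Nadaraya-Watson kernel smoother on \(S\), and the IPW estimating equation is then solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated design (hypothetical): d = 5 covariates, reduction direction b.
n, d = 2000, 5
b = np.array([1.0, 0.5, 0.0, 0.0, 0.0])      # assumed SDR direction
X = rng.normal(size=(n, d))
S = X @ b                                     # reduced covariate, p = 1
Y = S + rng.normal(size=n)                    # response with true mean E[Y] = 0
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + S)))    # MAR: missingness depends on S only
delta = rng.binomial(1, pi_true)              # non-missingness indicator

def nw_propensity(S, delta, h):
    """Nadaraya-Watson kernel estimate of pi(s) = P(delta = 1 | S = s)."""
    diff = (S[:, None] - S[None, :]) / h      # pairwise scaled differences
    K = np.exp(-0.5 * diff**2)                # Gaussian kernel weights
    return K @ delta / K.sum(axis=1)

# Kernel-assisted propensity, clipped away from 0 for stable weighting.
pi_hat = np.clip(nw_propensity(S, delta, h=0.3), 0.05, 1.0)

# Solve the IPW estimating equation sum_i delta_i/pi_hat_i * (Y_i - theta) = 0.
w = delta / pi_hat
theta_ipw = np.sum(w * Y) / np.sum(w)

# The naive complete-case mean ignores the MAR mechanism and is biased here.
theta_naive = Y[delta == 1].mean()
```

Because units with large \(S\) (and hence large \(Y\)) are observed more often, the complete-case mean overshoots the true value 0, while the kernel-assisted IPW estimate corrects the selection by reweighting.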
    consistency and asymptotic normality
    dimension reduction
    kernel-assisted
    M-estimators
    missing at random